With the technological advancements of the past years, structure determination and prediction for membrane proteins have become easier. While these approaches give snapshots of one or more conformational states of a protein, complementary techniques are necessary to elucidate the conformational space and the transitions between states during function. Electron paramagnetic resonance (EPR) spectroscopy is a powerful tool for addressing these aspects. In this thesis, site-directed spin labeling and pulsed electron-electron double resonance (PELDOR) spectroscopy, combined with various biochemical tools, were used to explore the conformational heterogeneity of the β-barrel assembly machinery (BAM) complex under in vitro and in situ conditions. The BAM complex, present in the outer membrane (OM) of Gram-negative bacteria, is responsible for the folding and insertion of outer membrane proteins (OMPs). As the majority of OMPs depend on the BAM complex for their biogenesis, it is one of the most essential components of the cell and hence a potential target for new antibiotics. BAM is a heterooligomeric complex composed of the BamA, BamB, BamC, BamD, and BamE subunits. BamA is the central transmembrane protein directly involved in the folding and insertion process. The periplasmic regions of BamA are scaffolded by the BamB-E lipoproteins. Available structures of the BAM complex reveal highly dynamic behaviour. The BAM complex is also highly intertwined with its complex membrane environment and is hypothesized to depend on the asymmetric bilayer for its function. The functional relevance of the accessory lipoproteins, and how BAM recruits and folds diverse OMPs, remains elusive.
The thesis examines the membrane-bilayer dependence of the BAM complex and the role of the lipoproteins in the conformational cycling of BamA. By comparing the conformational states of the central component BamA in detergent micelles and in isolated native outer membranes, it is demonstrated that the native bilayer helps BamA attain multiple conformational states. In the native outer membrane environment, BamA exhibits greater flexibility than in detergent micelles. Further, the conformational dynamics of BamA were explored in different subcomplexes in detergent micelles. Binding of the BamCDE subcomplex creates specific changes in BamA at the lateral gate, the periplasmic regions, and the extracellular loops, leading to a lateral-open state. BamB alone does not induce any changes in BamA, indicating that it might play an accessory role in the function of the complex. The results demonstrate that BamCDE plays a key regulatory role in the lateral gating mechanism of BamA. Additionally, spin labeling and PELDOR spectroscopy were optimized for the extracellular loops of the full complex in intact E. coli cells. The data validate the conformational states of the complex observed in detergent micelles. However, the distance distributions show increased dynamics in the cellular environment, especially at the lateral gate region. The increased heterogeneity might be due to the presence of the asymmetric membrane, lipopolysaccharides, or substrate interactions. Overall, the thesis answers key questions on the conformational dynamics of BamA and delineates the role of the lipoproteins in the folding mechanism. It also opens new opportunities to study the functional mechanism of BAM under physiologically relevant conditions by performing experiments in native outer membranes and intact E. coli cells.
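For context, PELDOR converts the measured dipolar coupling between two spin labels into a distance. In the point-dipole approximation (a standard textbook relation, not specific to this thesis) the coupling falls off with the inverse cube of the interspin distance r:

```latex
\nu_{\mathrm{dd}}(r,\theta)
  = \frac{\mu_0}{4\pi}\,\frac{g_1 g_2 \mu_\mathrm{B}^2}{h\,r^3}
    \left(1 - 3\cos^2\theta\right)
```

For nitroxide labels this corresponds to a perpendicular dipolar frequency of roughly 52 MHz per (r/nm)^3, which is how distance distributions such as those reported here are extracted from time-domain PELDOR traces.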
Within the FAIR Phase-0 programme at the GSI Helmholtzzentrum für Schwerionenforschung, the Coulomb dissociation of 16O into 12C and 4He was measured. A 16O primary beam with an energy of 500 AMeV impinged on lead, carbon, and tin targets at the R3B Cave C, and the fragments were detected. The beam intensity was set to several 10^9 ions per second, which made radical changes to the standard R3B setup necessary. All detectors were produced with holes or variable gaps in order to let unreacted beam particles pass. New detectors were built to cope with the expected high number of particles to be detected. All detectors are based on the detection of scintillation light.
In the Coulomb field of the target atoms, the 16O ions can be excited, which may lead to a breakup into lighter fragments. A calorimeter around the target helps to identify unwanted contributions to the cross-section from excited states. The fragments pass the first set of fiber detectors and are deflected in a dipole field. The particle tracks are reconstructed with Runge-Kutta algorithms. Ultimately, this enables the determination of the excitation energy in the center-of-mass frame of the excited 16O nucleus. The resulting spectrum yields the cross-section of the reaction.
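As a sketch of the idea behind Runge-Kutta tracking (this is not the analysis code of this experiment; the field strength, charge-to-mass ratio, and step size are illustrative), one RK4 step propagates the position and velocity of a charged fragment through a uniform dipole-like field:

```python
def lorentz_accel(v, qm, B):
    # a = (q/m) * v x B for a uniform field B = (0, 0, Bz)
    vx, vy, vz = v
    return (qm * vy * B, -qm * vx * B, 0.0)

def rk4_step(x, v, qm, B, dt):
    """One classical RK4 step for the coupled system x' = v, v' = a(v)."""
    def add(a, b, s):
        return tuple(ai + s * bi for ai, bi in zip(a, b))

    k1v = lorentz_accel(v, qm, B)
    k1x = v
    k2v = lorentz_accel(add(v, k1v, dt / 2), qm, B)
    k2x = add(v, k1v, dt / 2)
    k3v = lorentz_accel(add(v, k2v, dt / 2), qm, B)
    k3x = add(v, k2v, dt / 2)
    k4v = lorentz_accel(add(v, k3v, dt), qm, B)
    k4x = add(v, k3v, dt)

    x_new = tuple(x[i] + dt / 6 * (k1x[i] + 2 * k2x[i] + 2 * k3x[i] + k4x[i])
                  for i in range(3))
    v_new = tuple(v[i] + dt / 6 * (k1v[i] + 2 * k2v[i] + 2 * k3v[i] + k4v[i])
                  for i in range(3))
    return x_new, v_new

# Demo: the magnetic force does no work, so the speed must be conserved
x, v = (0.0, 0.0, 0.0), (1.0, 0.0, 0.0)
for _ in range(100):
    x, v = rk4_step(x, v, qm=1.0, B=1.0, dt=0.01)
speed = (v[0] ** 2 + v[1] ** 2 + v[2] ** 2) ** 0.5
```

In a real dipole the field is only locally uniform, so the step would be repeated with the interpolated field map along the trajectory; the accuracy check that the speed is conserved carries over.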
By comparing the experimental data with simulated events, new insights into the fusion reaction as it takes place during the helium-burning phase in stars become possible. The analysis of the experiment is still ongoing. However, the first results show that this experiment can provide data in an energy range never measured before. This will help to understand the cross-section and the astrophysical S-factor of this reaction.
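For reference, the astrophysical S-factor mentioned here factors the steep Coulomb-barrier penetration out of the measured cross-section (standard definition, Gaussian units):

```latex
S(E) = E\,\sigma(E)\,e^{2\pi\eta(E)},
\qquad
\eta(E) = \frac{Z_1 Z_2 e^2}{\hbar v},
```

where η is the Sommerfeld parameter and v the relative velocity of the reaction partners. Because S(E) varies slowly with energy, it can be extrapolated from the measured energy range down toward helium-burning energies.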
This thesis is situated in the field of frustrated magnetism, a subfield of condensed matter physics that describes the magnetic degrees of freedom in solids. Extended Kitaev models describe a particular class of materials where spin-orbit coupling, combined with effects from crystal field theory and strong electronic correlations, leads to effective magnetic interactions that are highly anisotropic. Such interactions can give rise to exotic physics, such as the emergence of a quantum spin liquid. In this thesis, extended Kitaev models are studied theoretically, primarily using numerical methods.
A heavily investigated Kitaev candidate material is α-RuCl3, where a key question has been centered on the possible existence of a magnetic-field-induced quantum spin liquid. While numerous experimental studies have uncovered various unconventional phenomena in this material and suggested different interpretations of the underlying physics, this thesis provides a comprehensive comparison and explanation of these phenomena within one consistent theoretical framework. Aside from purely magnetic properties, an additional focus lies on magnetoelastic effects, in which the coupling of the crystal lattice to the anisotropic spin system has to be considered.
Beyond α-RuCl3, a number of more recently introduced Kitaev candidate materials are investigated theoretically. These include RuBr3 and RuI3, whose layered honeycomb crystal structures resemble that of α-RuCl3 but whose heavier ligands lead to different spin-orbit coupling effects, as well as NaRuO2, which realizes a triangular-lattice structure.
In this dissertation, we look at environmental effects in extreme and intermediate mass-ratio inspirals into massive black holes. In these systems, stellar-mass compact objects orbit massive black holes and lose orbital energy through gravitational-wave emission and other dissipative forces. We explore environmental interactions with dark matter spikes, stellar distributions, and accretion disks, and combine and compare them. We discuss the existence and properties of dark matter spikes in the presence of these environmental effects. The signatures of the environmental effects, such as the phase-space flow, dephasing, shifts of the periapsis, and alignment with accretion disks, are examined. These signatures are quantified in isolated spike systems and in dry and wet inspirals. We generally find dark matter effects to be subdominant to the other environmental effects, but their impact on the waveform is still observable and identifiable. Lastly, the rates of inspirals and the impact of spikes are estimated. All of these results are obtained with the help of the code imripy, which is published alongside this work. If dark matter spikes exist, they should be observable with space-based gravitational-wave observatories.
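The dephasing signature can be illustrated with a toy quasi-circular inspiral (a sketch under strong simplifications, not the imripy API; geometric units G = c = 1, and all parameter values are invented): an extra dissipative channel, such as dynamical friction on a dark matter spike, shrinks the orbit faster, so fewer orbital cycles accumulate over the same radial range.

```python
import math

def inspiral_cycles(m1, m2, a0, a_final, extra_dadt=0.0, n=20000):
    """Orbital cycles accumulated while the orbital radius shrinks from
    a0 to a_final, driven by the Peters GW term plus an optional extra
    dissipative da/dt (geometric units G = c = 1)."""
    M = m1 + m2
    da = (a0 - a_final) / n
    a, cycles = a0, 0.0
    for _ in range(n):
        dadt = -64.0 / 5.0 * m1 * m2 * M / a ** 3 - extra_dadt  # GW + drag
        dt = da / abs(dadt)                    # time spent shrinking by da
        f_orb = math.sqrt(M / a ** 3) / (2.0 * math.pi)  # Kepler frequency
        cycles += f_orb * dt
        a -= da
    return cycles

# Dephasing: cycles lost to the environment relative to the vacuum inspiral
vacuum = inspiral_cycles(1e-5, 1.0, 100.0, 20.0)
dressed = inspiral_cycles(1e-5, 1.0, 100.0, 20.0, extra_dadt=1e-10)
dephasing_cycles = vacuum - dressed
```

Matched filtering is sensitive to fractions of a cycle over the full signal, which is why even a subdominant drag term can leave an identifiable imprint.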
This dissertation describes the experience gained with various preparation methods for CH cavities, aimed at further increasing cavity performance after fabrication. The performance is assessed via two key RF parameters: the electric field Ea and the intrinsic quality factor Q0. In contrast to normal-conducting (NC) cavities, the intrinsic quality factor of superconducting (SC) cavities can vary considerably with increasing electric field. The optimal outcome of cavity preparation is an increase of the maximum electric field while maintaining a high Q0 over the entire field range. Since Q0 is inversely proportional to the cavity losses, increasing the quality factor reduces the cryogenic losses for operation at a given field level. The evolution of the cavity performance over the course of this work is presented.
Most SC cavities are elliptical structures, used at high particle velocities and duty cycles. The preparation methods have therefore mainly been applied to and optimized for these structures. This work focuses on implementing the most reliable and promising surface treatments on the first SC 360 MHz CH prototype developed at the IAP. After 11 years of storage, this cavity showed degraded performance, accompanied by X-ray emission already at low electric fields. This points to an unintended venting with ambient air, which introduced particles that acted as enhanced electron sources. In addition, the power coupler had to be redesigned because of strong overcoupling.
The cavity was baked at 120 °C for 48 hours using heating tapes in the experimental hall of the IAP, which improved the quality factor at low field levels and shortened the time required to condition multipacting barriers. However, this treatment further reduced the maximum achievable electric field. The improvement of the quality factor is attributed to the outgassing of hydrocarbons during the bake. The negative effect on the maximum electric field is attributed less to the bake itself than to the transport of the cavity and to the vacuum components used, which are stored in the experimental hall.
The observed performance limitation is mainly explained by particles inside the resonator, since field emission occurred at low field strengths. High-pressure rinsing with ultrapure water (HPR) is the standard procedure for restoring highly clean inner surfaces after treatments that carry a risk of surface contamination. The HPR was planned and carried out in collaboration with the Helmholtz-Institut Mainz and the Gesellschaft für Schwerionenforschung. Already during the measurement of the Q-E curve, the resonator showed an increase in transmitted power at constant forward power, which had not been the case before the HPR. During CW RF conditioning, the cavity reached its highest gradient, with a markedly weaker Q drop at high field strengths.
Both in the 2008 measurement and in the measurement described here, the cavity preparation was completed with an HPR treatment, but several adaptations were prepared for the HPR treatment at HIM in Mainz. The CH prototype has no dedicated rinsing ports and was therefore rinsed with two different nozzles with different spray angles in order to maximize the reachable inner resonator surfaces. Using multiple spray angles could also be beneficial for CH cavities with rinsing ports and should be considered for future HPR applications.
Helium processing was performed on the CH prototype for 2.5 hours and yielded promising results with respect to the quality factor and gradient optimization.
During this process, the X-ray emission towards the workplace was measured and showed strong time-dependent fluctuations. This indicated the removal of particles and was subsequently confirmed by an increase of the electric field from 8.4 to 8.7 MV/m. An unexpected effect was observed in the Q slope at medium to high fields, where the quality factor was enhanced by 5% above 2 MV/m compared to the RF-conditioned case. This systematic increase had not been observed for this accelerating cavity before the treatment. Nitrogen-doped cavities show similar behaviour, in which interactions within the oxide layer correlate with changes of the quality factor. Since helium is a non-reactive element, possible explanations for this effect are the sputtering process and the incorporation of helium into the surface. A series of helium treatments is planned in order to find an optimized and safe recipe for CH cavities. A Q-E measurement after cooldown and before the treatment will also show whether the performance gain is degraded by warming up to room temperature.
The treatment sequence outlined in this work is strongly recommended for CH cavities. Baking has proven effective in reducing multipacting and the Q degradation at high fields, and its benefits remain unaffected by the subsequent HPR. No negative effect of the HPR on the multipacting behaviour was observed in this work. Subsequently, CW RF conditioning is performed until the cavity shows no further performance gain.
If the cavity is still limited by field emission, repeating the HPR treatment should be considered, since none of the CH cavities fabricated so far has been limited by it when the HPR was carried out carefully. It should also be noted that helium processing was only performed on the 360 MHz CH cavity when it exhibited low radiation from field emission. The risk of helium processing on CH cavities under strong field emission is unknown. The electron currents, and with them the ion-bombardment avalanches, can be expected to increase, posing a greater risk of damaging components. Based on current knowledge, helium processing should only be considered for well-prepared cavities with minimal field emission.
This thesis addresses the measurement of photons with particle detectors based on digital silicon pixel sensors. Two major steps in the upgrade programmes of the ALICE experiment at the CERN LHC are discussed:
1. FOCAL detector upgrade (2027): study of the detector response of the electromagnetic pixel calorimeter EPICAL-2 and of the shape of electromagnetic showers via test-beam measurements and Monte Carlo simulations.
2. ALICE 3 upgrade (2035): simulation studies of the background in the measurement of photons at very low transverse momentum.
Part 1: Performance of the electromagnetic pixel calorimeter EPICAL-2
Detector design and test measurements: EPICAL-2, a SiW sandwich calorimeter with ALPIDE sensors, has a depth of about 20 radiation lengths and roughly 25 million pixels. Test measurements were performed at Utrecht University (cosmic muons) as well as at DESY and the CERN SPS (electrons).
Simulation and validation: EPICAL-2 is implemented in the simulation package Allpix2 in order to validate the test measurements and study the detector behaviour. Systematic variations confirm the stability and reproducibility of the simulation.
Data preparation and shower profiles: In the data analysis, faulty pixels are excluded, pixel hits are grouped into clusters, chips are calibrated, and the beam angle is corrected. The longitudinal profile of electromagnetic showers shows that the shower maximum lies somewhat deeper in the simulation than in the test data, which could be due to additional material or an incomplete description of the shower development in the simulation. The lateral profile shows that shower separation at the millimetre scale is possible.
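Longitudinal profiles like these are commonly fitted with the standard Gamma-function parametrization of electromagnetic cascades (the Longo-Sestili form; the parameter values below are illustrative and not fitted to the EPICAL-2 data):

```python
import math

def longo_sestili(t, a, b):
    """Normalized energy deposit per radiation length at depth t:
    (1/E0) dE/dt = b * (b*t)**(a-1) * exp(-b*t) / Gamma(a)."""
    return b * (b * t) ** (a - 1) * math.exp(-b * t) / math.gamma(a)

def shower_max(a, b):
    """Depth of the shower maximum in radiation lengths: (a-1)/b."""
    return (a - 1.0) / b

# Demo with made-up parameters a = 4, b = 0.5 (shower max at 6 X0)
tmax = shower_max(4.0, 0.5)
total = sum(longo_sestili(i * 0.01, 4.0, 0.5) * 0.01 for i in range(1, 5001))
```

Comparing the fitted shower-maximum depth between data and simulation is one way to quantify the shift described above.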
Energy response and resolution: The non-linear energy response is observed in both test data and simulations. The energy resolution of EPICAL-2 is better for clusters than for pixel hits and comparable to that of the analogue CALICE prototype. Simulations without beam-energy fluctuations show a better energy resolution than the test data.
Part 2: Background in the measurement of photons in ALICE 3
Simulation setup: The ALICE 3 detector geometry is implemented in GEANT4 in order to study the background in the measurement of soft photons. Simulations with PYTHIA and GEANT4 show that the background consists mainly of decay photons and photons from external bremsstrahlung.
Results of the background studies: The background from external-bremsstrahlung photons dominates and, within the FCT acceptance, exceeds the theoretical soft-photon signal by a factor of 5 to 10. In the simulation, the material budget of ALICE 3 is determined to be 8-14% X0; already at 5% X0 the background is as large as the expected signal.
Options for background reduction: Studies show that an electron veto can improve the signal-to-background ratio by a factor of 30, and a material reduction through an optimized beam pipe can improve it by a factor of 7.
Overall, the results of the first part of this thesis demonstrate the good performance of EPICAL-2 with respect to the energy measurement and the determination of the shower shape. Moreover, they support the use of digital calorimeters in the FOCAL upgrade of the ALICE experiment and show the potential of digital calorimeter technology for future high-energy physics experiments.
The results of the second part of this thesis make an essential contribution to the planned ALICE 3 upgrade. Furthermore, they illustrate how an electron veto and a reduction of the material budget can together form a promising measurement strategy.
The main focus of this thesis is the application of the nonperturbative Functional Renormalization Group (FRG) to the study of low-energy effective models for Quantum Chromodynamics (QCD). The study of effective field theories and models is crucial for our understanding of physics, especially when we deal with theories of fundamental interactions like QCD. The ultimate goal is to understand the critical properties of these models in such a way that we can gain insight into the actual critical phenomena of QCD, with a special focus on its chiral phase transition. The choice of the FRG derives from the fact that it belongs to the class of functional non-perturbative methods and has the additional advantage of linking physics at different energy scales. These features make the FRG well suited to the study of non-perturbative phenomena, and in particular of phase transitions, like the ones expected for strongly interacting matter. However, the functional nature of the FRG approach and of the Wetterich equation means that an exact solution is hardly possible, and an ansatz for the effective action is generally needed. In this work we adopt the local-potential approximation (LPA), which prescribes stopping at zeroth order in the derivative expansion of the quantum effective action, retaining only the quantum effective potential. We exploit the key observation that, for specific models and truncation schemes, the FRG flow equation can be cast in the form of an advection-diffusion equation, possibly with a source term. This type of equation belongs to the class of problems faced in the context of viscous hydrodynamics. An innovative approach to the solution of the FRG flow equation therefore consists in choosing a method developed specifically for this class of hydrodynamic equations. In particular, the Kurganov-Tadmor finite-volume scheme is adopted.
Throughout this work we apply this scheme to the study of different physical systems, showing the reliability and the flexibility of this approach.
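The numerical idea can be sketched on a stand-in problem with the same advection-diffusion structure, the 1D Burgers equation u_t + (u^2/2)_x = nu * u_xx (a minimal Kurganov-Tadmor-type central scheme; the grid, time step, and initial profile are invented, and this is not the thesis solver):

```python
import math

def minmod(a, b):
    """Slope limiter: zero at extrema, the smaller slope elsewhere."""
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def kt_step(u, dx, dt, nu):
    """One forward-Euler step of a Kurganov-Tadmor-type central scheme
    for u_t + (u^2/2)_x = nu * u_xx; the end values are held fixed."""
    n = len(u)
    f = lambda v: 0.5 * v * v
    s = [0.0] * n                             # limited cell slopes
    for j in range(1, n - 1):
        s[j] = minmod(u[j] - u[j - 1], u[j + 1] - u[j])
    H = [0.0] * (n - 1)                       # numerical flux at j+1/2
    for j in range(n - 1):
        um = u[j] + 0.5 * s[j]                # reconstructed left state
        up = u[j + 1] - 0.5 * s[j + 1]        # reconstructed right state
        a = max(abs(um), abs(up))             # local wave speed, f'(u) = u
        H[j] = 0.5 * (f(um) + f(up)) - 0.5 * a * (up - um)
    out = u[:]
    for j in range(1, n - 1):
        adv = -(H[j] - H[j - 1]) / dx
        diff = nu * (u[j + 1] - 2.0 * u[j] + u[j - 1]) / dx ** 2
        out[j] = u[j] + dt * (adv + diff)
    return out

# Demo: a steepening sine profile stays bounded and oscillation-free
dx, dt, nu = 0.02, 0.004, 0.005
u = [math.sin(2.0 * math.pi * j * dx) for j in range(51)]
for _ in range(50):
    u = kt_step(u, dx, dt, nu)
```

The appeal of such central schemes is that they handle the steep fronts that develop in the flowing potential without spurious oscillations, while remaining simple to implement.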
In the first part of the thesis, we discuss the well-known O(N) model, using the hydrodynamic formulation to solve the FRG flow equation in the LPA truncation. We focus on the critical behaviour of the system and calculate the corresponding critical exponents. Particular attention is given to the error estimation in the extraction of critical exponents, a necessary but not widely explored aspect. The results are fully compatible with others in the literature, obtained with different perturbative and nonperturbative methods, which validates the procedure. In the second part of the thesis, we introduce the quark-meson model as a low-energy effective model for QCD, with a specific focus on its chiral symmetry-breaking pattern and the subsequent dynamical quark-mass generation. The LPA flow equation is of the advection-diffusion type, with an extra source contribution due to the inclusion of fermionic degrees of freedom. We then use the numerical techniques developed here to derive the phase diagram of the model, which agrees with the one obtained with other techniques in the literature.
We also follow another possible route for the study of the critical properties of the quark-meson model: so-called thermodynamic geometry. This approach is based on interpreting the parameter space of the system as a differential manifold. One can then obtain relevant information about the phase transitions from the Ricci scalar. We studied the chiral crossover by investigating the behavior of the Ricci scalar up to the critical point, finding a peak in the presence of the crossover. We then repeated this analysis in the chiral limit, where the phase transition is expected to be of second order. This geometric technique offers a different view on the chiral phase transition of QCD, because it is based on the calculation of quantities that are influenced by higher-order moments of the thermodynamic potential, thus allowing for a more comprehensive analysis of the phase transition.
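For a two-parameter system with a Hessian metric g_ij built from second derivatives of a thermodynamic potential φ (the structure used in thermodynamic geometry), the Ricci scalar reduces, up to sign convention, to a compact determinant form that makes the role of higher derivatives explicit:

```latex
g_{ij} = \partial_i \partial_j \phi,
\qquad
R = -\frac{1}{2 g^{2}}
\begin{vmatrix}
\phi_{,11} & \phi_{,12} & \phi_{,22}\\
\phi_{,111} & \phi_{,112} & \phi_{,122}\\
\phi_{,112} & \phi_{,122} & \phi_{,222}
\end{vmatrix},
\qquad
g = \phi_{,11}\phi_{,22} - \phi_{,12}^{2}.
```

Since R involves third derivatives of the potential, it is sensitive to higher-order moments of the fluctuations, consistent with the remark above; the precise sign and choice of potential vary between conventions in the literature.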
Finally, we exploit these numerical advances to address the issue of the regulator choice in FRG calculations. This is one of the most delicate issues arising when approximations are used to solve the FRG flow equation, and it deserves extensive investigation. In particular, we performed a vacuum parameter study and used the RG-consistency requirement to determine the impact of the regulator choice on the physical observables and on the phase diagram of the model. Through this study we develop a systematic method to compare the results obtained with different regulators. We show the importance of choosing an appropriate UV cutoff for the determination of UV-independent IR observables and, consequently, the impact that the truncation of the effective average action and the choice of the regulator have on the latter.
This thesis is concerned with the investigation of static and dynamic properties of quantum Heisenberg paramagnets in the absence of a magnetic field and therefore for vanishing magnetization. For this purpose a new formulation of the spin functional renormalization group (SFRG) is employed. The first manifestations of the SFRG were developed by Krieg and Kopietz, motivated by the FRG approach to ordinary field theories and the older works of Vaks, Larkin and Pikin on diagrammatic methods for spin operators.
The main idea is to study quantum spin systems by considering the evolution of correlation functions under a continuous deformation of the interaction between magnetic moments, starting from a solvable limit. This leads to nonperturbative results for quantities like the spin-spin correlation function. After a basic introduction to the phenomena and concomitant problems discussed in this thesis, a detailed description of the SFRG method in its initial formulation is given in the second chapter. We start with the generating functional of connected imaginary-time spin-correlation functions GΛ [h], for which an exact flow equation is derived. A particular issue, already pointed out by Krieg and Kopietz, arises here, namely the singular non-interacting limit of its subtracted Legendre transform ΓΛ [m]. As a consequence the initial condition of that functional does not have a proper series expansion in powers of m. This prevents us from working directly within a pure one-particle irreducible (1-PI) parametrization of the correlation functions, as is often done in the context of field theories. Thus motivated, we develop a workaround explicitly tailored to paramagnets, which provides us with a functional that has a well-behaved Legendre transform. The new approach is based on a different treatment of fluctuations at zero and finite frequencies, analogous to a previous hybrid formulation for the symmetry-broken phase. Certain properties, considered to be highly relevant for isotropic paramagnets, as well as previous observations, already made in the study of simpler spin systems like the Ising model, serve as additional justifications for choosing this construction.
In the third chapter our new method is assessed by calculating the dynamic susceptibility G(k, iω) and thus the dynamic structure factor S(k, ω) in the symmetric phase. For this purpose an approximate integral equation for the dynamic polarization function Π̃(k, iω) is derived. This equation results from a truncation of the hierarchy of flow equations and contains static quantities that are assumed to be known from another source. Our first application is the high-temperature limit T → ∞ in d ≤ 3 dimensions. Salient features believed to be part of the spin dynamics in isotropic Heisenberg magnets, such as (anomalous) diffusion in a suitable hydrodynamic limit, are also exhibited by our solution. Moreover, we obtain the same order of magnitude for the diffusion coefficient D as in experiments and other theoretical calculations. Other aspects do not entirely agree with previous approaches.
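In the hydrodynamic limit referred to above, diffusive spin dynamics gives the structure factor the familiar Lorentzian form (a standard result for spin diffusion, quoted here for orientation):

```latex
S(k,\omega) \;\propto\; G(k)\,\frac{1}{\pi}\,
\frac{D k^{2}}{\omega^{2} + \left(D k^{2}\right)^{2}},
```

so the linewidth D k^2 of the quasielastic peak directly yields the diffusion coefficient that is compared against experiments and other calculations.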
Afterwards we continue by investigating systems close to the critical point Tc. Dynamic scaling forms for Π̃(k, iω) and S(k, ω), which, like spin diffusion, are postulated on the basis of quite general physical arguments, are reproduced. Agreement of the line shapes with neutron scattering experiments at T = Tc is found to be satisfactory, with deviations for ω → 0 that, as at infinite temperature, may be attributed to the simplicity of the approximation.
Finally, we focus our attention on the thermodynamic properties of isotropic Heisenberg paramagnets by calculating the static susceptibility G(k). For this purpose we employ simple truncation schemes of the flow equations for the static self-energy ΣΛ(k) and the four-spin vertex ΓΛ, together with a basic ansatz for the dynamic polarization Π̃(k, iω) in quantum systems. As a result we obtain transition temperatures Tc of three-dimensional nonfrustrated magnets within an accuracy of 5 percent compared to established benchmark values from quantum Monte Carlo and high-temperature series expansions. We conclude this chapter with an outlook on the application of our method to frustrated systems, which may require a combined non-trivial calculation of static and dynamic properties.
The radioactive noble gas radon and its likewise radioactive decay products account for the largest part of the natural radiation exposure in Germany. Despite its classification as carcinogenic with respect to lung cancer, it is used for the therapy of inflammatory diseases. The main uptake mechanism is incorporation via respiration, although radon can also be absorbed through the skin. Radon is distributed throughout the body via the blood and can accumulate in tissues with high radon solubility. The decay products, however, remain in the lung, where they decay before they can be cleared away and damage the local tissue.
According to simulations, the lung dose is determined largely by the smallest radon decay products (< 10 nm), which attach particularly effectively in the respiratory tract. Owing to the inhomogeneous deposition of the decay products, the resulting dose is locally highly variable. Simulations have identified bifurcations as sites of particularly high deposition, but the experimental data on the deposition of the smallest radon decay products are limited. Because of the increase in complexity of simulations or experiments, most studies do not consider the oscillatory breathing cycle but only a unidirectional airflow. In this work, an experimental model was developed and established that allows the deposition of radon decay products to be measured and that can distinguish between three size fractions (unattached decay products: < 10 nm, clusters: 20-100 nm, attached decay products: > 100 nm). The airflow through the model reproduces both inhalation and exhalation. First experiments with the newly developed setup reproduced the enhanced deposition of the unattached decay products at a bifurcation known from simulations. Increasing the bifurcation angle from 70° to 180° showed only a minimal increase, on the order of the measurement error. The dominant attachment process of the unattached decay products is Brownian motion, which is independent of the bifurcation angle. Nevertheless, a modified angle can alter the airflow and the resulting turbulence and thereby influence the deposition; this cannot be resolved with the setup used here. Contrary to observations in the literature, increasing the breathing rate from 12 to 30 breaths per minute led to no measurable change in deposition in the experiments performed in this work.
This observation can be attributed to opposing effects. On the one hand, a faster airflow leads to shorter residence times of the unattached decay products in the model, making deposition less likely. On the other hand, more secondary flows arise, and in absolute terms more particles are pumped through the model. These effects presumably cancel in the range tested here.
As a potential protective measure for reducing the lung dose, the filtration efficiency of face masks (surgical masks, FFP2 masks) against radon and its decay products was determined in this work. While radon itself is not filtered, the unattached decay products were retained almost completely (> 98%) and the clusters largely (≈ 80%).
Radon itself can be distributed throughout the organism and accumulate in tissues. To determine the dose, biokinetic models are used. These depend on the quality of their input parameters; for example, the values for the partitioning of radon between blood and tissue rest on solubility values measured experimentally in mice and rats. Unknown values are calculated by the International Commission on Radiological Protection as a weighted mean based on tissue composition. In this work, the solubility was determined in human blood samples and in aqueous solutions of various concentrations of the blood proteins hemoglobin and albumin. More radon dissolved in plasma than in erythrocyte concentrate and whole blood. The protein solutions showed no concentration dependence of the solubility; only in heat-denatured hemoglobin was a lower solubility measured. Based on these observations, the hypothesis was tested whether the solubility of a mixture can be calculated as the weighted mean of the individual solubilities. To this end, the solubilities were determined in a mixture of two liquids (1-pentanol, oleic acid). The experimentally determined solubility was almost twice the calculated value. This difference may arise because a calculation based on composition neglects the interactions between the solvents. This underlines the need for experimental data on the distribution and dissolution of radon in different tissues.
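The weighted-mean assumption tested above can be made concrete with a small sketch (the numbers below are invented for illustration and are not the thesis measurements):

```python
def weighted_mean_solubility(volume_fractions, coefficients):
    """Composition-weighted solubility coefficient of a mixture, as
    assumed by composition-based dose models."""
    assert abs(sum(volume_fractions) - 1.0) < 1e-9
    return sum(f * L for f, L in zip(volume_fractions, coefficients))

# Invented illustration: if a 50:50 mixture is measured at L = 12 while
# its pure components give 9 and 4, the additive model underestimates
# the true solubility by a factor of ~1.8.
L_calc = weighted_mean_solubility([0.5, 0.5], [9.0, 4.0])
ratio = 12.0 / L_calc
```

The discrepancy factor found in the thesis for the 1-pentanol/oleic-acid mixture is of this order, which is exactly the kind of solvent-solvent interaction effect the additive model cannot capture.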
Efficient modeling and mitigation of quadrupole errors in synchrotrons and their beam transfer lines
(2023)
This thesis investigates the problem of estimating quadrupole errors in synchrotrons as well as how to minimize the influence of quadrupole errors in beam transfer lines (beamlines). It emphasizes the importance of treating possible error sources in all parts of an accelerator in order to provide consistently high beam quality to the experimental stations. While the presented methods have been investigated using the example of the SIS18 synchrotron and the HEST beamlines at the GSI Helmholtz Centre for Heavy Ion Research, they are equally relevant for the future synchrotrons and beamlines of the Facility for Antiproton and Ion Research in Europe (FAIR).
Part 1 discusses the problem of estimating quadrupole errors via orbit response measurements at synchrotrons. An emphasis is put on investigating the influence of the availability of steerer magnets and beam position monitors (BPMs) on the solvability of the inverse problem as well as on the propagation of measurement uncertainty for the estimation of quadrupole errors. The problem is approached via analytical considerations as well as via dedicated simulation studies. By developing an analytical expression for the Jacobian matrix, the theoretical boundaries for the solvability of the inverse problem are derived. Moreover, it is shown that the analytical expressions for the Jacobian matrix can be used during the fitting procedure to achieve a significant improvement in the computational efficiency by a factor $N_{steerers} \times N_{quadrupoles}$, where $N$ denotes the number of lattice elements of the respective type. The presented results are tested via dedicated measurements at the SIS18 synchrotron.
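The origin of the quoted speedup factor can be made plausible with a back-of-the-envelope count: a finite-difference Jacobian of the orbit response matrix needs one extra full response computation per fitted quadrupole error, while a closed-form Jacobian needs none. The lattice sizes below are illustrative assumptions, not SIS18 parameters:

```python
# Counting sketch (not the thesis code) of why an analytical Jacobian saves
# a factor of roughly N_steerers * N_quadrupoles in model evaluations.
# All element counts are illustrative.

N_steerers, N_quadrupoles = 12, 24

# One response-matrix evaluation requires one closed-orbit computation
# per steerer kick.
orbit_computations_per_response = N_steerers

# Finite differences: perturb each quadrupole once, recompute all responses.
fd_cost = N_quadrupoles * orbit_computations_per_response
# Analytical expression: assumed here to need no extra orbit computations.
analytic_cost = 1

print(fd_cost // analytic_cost)  # 288 = N_steerers * N_quadrupoles
```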
Part 2 discusses, complementary to Part 1, the influence of quadrupole errors in beam transfer lines with respect to the beam quality requirements of the experimental stations. A preventive approach is presented that makes it possible to minimize the influence of possible quadrupole errors on the degradation of beam quality. By identifying and selecting robust quadrupole configurations, stable operation of the beamline can be enabled and the time needed by operators to readjust the beamline parameters can be reduced. The concept of beamline robustness is developed and studied with the help of dedicated simulations. The simulation results are used to identify properties that distinguish robust from non-robust quadrupole configurations. In addition, various methods for speeding up the computational process of identifying robust quadrupole configurations are presented. The methods and results are tested via dedicated measurements at two different beamlines at the GSI Helmholtz Centre for Heavy Ion Research and at Forschungszentrum Jülich.
The theoretical and experimental investigation of exotic hadrons like tetraquarks is an important branch of modern elementary particle physics. In this thesis I investigate different four-quark systems using lattice QCD and search for evidence of stable tetraquark states or resonances.
Lattice QCD as a non-perturbative approach to QCD allows an accurate and reliable determination of the masses of strongly bound hadrons.
However, most tetraquarks appear as weakly bound states or resonances, which makes a theoretical investigation using lattice QCD difficult due to the finite spatial volume. A rigorous treatment of such systems is feasible using the so-called Lüscher method, which makes it possible to calculate the scattering amplitude from the finite-volume energy spectrum determined in a lattice QCD calculation. Similarly to the analysis of experimental data, this scattering amplitude can then be used to determine the binding energies of bound states or the masses and decay widths of resonances in the infinite volume.
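In its simplest, standard form (s-wave scattering of two equal-mass particles in a cubic box of size $L$, quoted here as textbook background rather than from the thesis), the Lüscher quantization condition relating finite-volume energies to the phase shift reads:

```latex
p \cot \delta_0(p) \;=\; \frac{2}{\sqrt{\pi}\,L}\, Z_{00}\!\left(1; q^2\right),
\qquad q = \frac{pL}{2\pi},
```

where $Z_{00}$ is the Lüscher zeta function and the scattering momentum $p$ is obtained from a measured finite-volume energy via $E = 2\sqrt{m^2 + p^2}$. Bound states then appear as poles of the resulting amplitude below threshold, and resonances as poles on the second Riemann sheet.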
In my work I calculate the low-lying energy spectra of different four-quark systems and use, where necessary, the Lüscher method to determine the masses of potential tetraquark states.
I focus on systems consisting of two heavy antiquarks and two light quarks, where at least one of the heavy antiquarks is a bottom quark.
Even though such tetraquarks have not yet been experimentally detected, they are considered promising candidates for particles that are stable with respect to the strong interaction.
A decisive step for successfully calculating low-lying energy levels for such four-quark systems is a carefully chosen set of creation operators, which represent the physical states most accurately. In addition to operators that generate a local structure where all four quarks are located at the same space-time point, I also use so-called scattering operators that resemble two spatially separated mesons. These scattering operators turned out to be relevant for successfully determining the lowest energy levels and are therefore essential, especially if a Lüscher analysis is carried out.
In my work, I considered two different lattice setups to study the four-quark systems $\bar{b}\bar{b}ud$ with $I(J^P)=0(1^+) $, $\bar{b}\bar{b}us$ with $J^P=1^+ $ and $\bar{b}\bar{c}ud$ with $I(J^P)=0(0^+) $ and $I(J^P)=0(1^+) $ and to predict potential tetraquark states. In both setups, I considered scattering operators. While in the first setup I used them only as annihilation operators, in the second setup they were included both as creation and annihilation operators. Additionally, in the second lattice setup, I performed a simplified investigation of the $\bar{b}\bar{b}ud$ system with $I(J^P)=0(1^-) $, which is a potential candidate for a tetraquark resonance. The results of the investigation of the mentioned four-quark systems can be summarized as follows:
For the $ \bar{b}\bar{b}ud $ four-quark system with $ I(J^P)=0(1^+) $ I found a deeply bound ground state slightly more than $ 100\,\textrm{MeV} $ below the lowest meson-meson threshold. The existence of a corresponding $\bar{b}\bar{b}ud$ tetraquark in the infinite volume was confirmed using a Lüscher analysis and possible systematic errors due to the use of lattice QCD were taken into account.
Similar results were obtained for the $ \bar{b}\bar{b}us $ four-quark system with $ J^P=1^+ $. Again, I found a ground state well below the lowest meson-meson threshold, but slightly less deeply bound than for the $ \bar{b}\bar{b}ud $ system. Effects due to the finite volume turned out to be negligible for this system, as already predicted for the $ \bar{b}\bar{b}ud $ system. For the $ \bar{b}\bar{c}ud $ four-quark systems with $ I(J^P)=0(0^+) $ and $ I(J^P)=0(1^+) $, I was able to rule out the existence of a deeply bound tetraquark state based on the energy spectrum in the finite volume. However, by means of a scattering analysis using the Lüscher method, I found evidence of a broad resonance in both channels.
In the case of the $ \bar{b}\bar{b}ud $ four-quark system with $ I(J^P)=0(1^-) $, I could neither confirm the existence of a resonance, nor rule out its existence with certainty.
In particular, my investigations showed that the results of the two different lattice simulations are consistent. The theoretical prediction of the bound tetraquark states $\bar{b}\bar{b}ud$ and $\bar{b}\bar{b}us$ as well as the tetraquark resonances in the $\bar{b}\bar{c}ud$ system in this work represent an important contribution to the future experimental search for exotic hadrons and can support the discovery of previously unobserved particles.
ATP-binding cassette (ABC) transporters shuttle diverse substrates across biological membranes. They play a role in many physiological processes, but they also underlie the antibiotic resistance of microbes and multidrug resistance in cancer, and their dysfunction can lead to serious diseases. Transport is achieved through an ATP-driven closure of the two nucleotide-binding sites (NBSs), which induces a transition between an inward-facing (IF) and an outward-facing (OF) conformation of the connected transmembrane domains (TMDs). In contrast to this forward transition, the reverse transition (OF-to-IF), which involves Mg2+-dependent ATP hydrolysis and release, is less understood. This is particularly relevant for heterodimeric ABC transporters with asymmetric NBSs. These transporters possess an ATPase-active consensus NBS (c-NBS) and a degenerate NBS (d-NBS) with little or no ATPase activity.
Crucial details regarding function and mechanism of the transport cycle remain elusive.
Here, these open questions were addressed using pulsed electron-electron double resonance (PELDOR or DEER) spectroscopy of the heterodimeric ABC exporter TmrAB.
To better understand the transport cycle, the underlying kinetics of the conformational transitions need to be elucidated. By introducing paramagnetic nitroxide (NO) spin probes at key positions of TmrAB and employing time-resolved PELDOR spectroscopy, the forward transition could be followed over time and the rate constants for the conformational transition at the TMDs and NBSs were characterized.
The temperature dependence of these rate constants was further analyzed to determine, for the first time, the activation energy of conformational changes in a large membrane protein. For TMD opening and c-NBS dimerization, values of 75 ± 27 kJ/mol and 56 ± 3 kJ/mol, respectively, were found. These values agree with reported activation energies of peptide transport and peptide dissociation in other ABC transporters, suggesting that the forward transition may be the rate-limiting step for substrate translocation.
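The activation energies quoted above follow from the Arrhenius temperature dependence of the rate constants, k = A·exp(-Ea/RT). A minimal two-temperature estimate is sketched below with synthetic numbers; the measurement temperatures and prefactor are assumptions for illustration, not values from the thesis:

```python
import math

R = 8.314  # J/(mol*K), gas constant

def activation_energy(k1, T1, k2, T2):
    """Arrhenius estimate of the activation energy from rate constants
    measured at two temperatures: k = A * exp(-Ea / (R * T))."""
    return R * math.log(k2 / k1) / (1.0 / T1 - 1.0 / T2)

# Synthetic check with an assumed Ea of 56 kJ/mol (the c-NBS value above);
# the prefactor A and the temperatures are arbitrary for this illustration.
Ea_true, A = 56e3, 1e9
k = lambda T: A * math.exp(-Ea_true / (R * T))
print(round(activation_energy(k(283.0), 283.0, k(303.0), 303.0) / 1e3, 1))  # 56.0
```

In practice one fits ln k against 1/T over many temperatures; the two-point formula is the slope of that Arrhenius plot.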
The functional relevance of asymmetric NBSs is so far not well understood. By combining Mg2+-to-Mn2+ substitution with Mn2+-NO and NO-NO PELDOR spectroscopy, the binding of ATP-Mn2+, the conformation of the NBSs, and the conformation of the TMDs could be simultaneously monitored for the first time. These results reveal an asymmetric post-hydrolytic state. Time-resolved investigation showed that ATP hydrolysis at the active c-NBS triggers the reverse transition, whereas opening of the impaired d-NBS regulates the return to the IF conformation.
The Heidelberg Ion-Beam Therapy Center (HIT) provides proton, helium, and carbon ion beams of varying energy and intensity for cancer treatment, as well as oxygen ion beams for experiments. The accelerator used for this purpose is also capable of delivering ion-beam intensities below those used for therapy. However, the currently installed beam diagnostics system cannot measure the beam profile at such low intensities (< 10^5 ions/s). Yet potential medical applications exist for these low-intensity ion beams, such as a novel and potentially clinically advantageous imaging modality: ion radiography. An essential prerequisite for this and other applications is a system for monitoring low-intensity ion beams. Such a system was designed, built, tested, and optimized in the course of this work.
The operating principle is based on scintillating fibers, in particular fibers with increased radiation hardness to allow permanent placement in the therapy beam. An ion traversing these fibers briefly excites the scintillator they contain through collision processes. The deposited energy is subsequently re-emitted in the form of photons. Silicon photomultipliers mounted at the ends of the fibers convert the photon signals into amplified electrical pulses. These pulses are recorded and processed by novel, dedicated readout electronics. A prototype setup consisting of these components was tested in the beam and successfully records the transverse beam profile over the intensity range from 10^7 ions/s down to 10^2 ions/s. Furthermore, the successful arrival-time measurement of single ions at intensities up to 5·10^4 ions/s provided a proof of feasibility for measuring the tracks of individual particles.
The strong force is one of the four fundamental interactions, and its theory is called Quantum Chromodynamics (QCD). A many-body system of strongly interacting particles (QCD matter) can exist in different phases depending on temperature (T) and baryonic chemical potential (µB). The phases and the transitions between them can be visualized in a µB−T phase diagram. Extracting the properties of QCD matter, such as its compressibility, viscosity, and various susceptibilities, together with its equation of state (EoS), is an important aspect of the study of QCD matter. In the region of near-zero baryonic chemical potential and low temperatures, the degrees of freedom of QCD matter are hadrons, in which quarks and gluons are confined, while at higher temperatures partonic (quark and gluon) degrees of freedom dominate. This partonic (deconfined) state is called the quark-gluon plasma (QGP) and is intensively studied at CERN and BNL. According to lattice QCD calculations at µB=0, the transition to the QGP is a smooth cross-over and takes place at T≈156 MeV. The region of the QCD phase diagram where matter is compressed to densities of a few times normal nuclear density (µB of several hundred MeV) is not accessible to current lattice QCD calculations and is a subject of intensive research. Some phenomenological models predict a first-order phase transition between the hadronic and partonic phases in the region of T≲100 MeV and µB≳500 MeV. Searching for signs of a possible phase transition and a critical point, or clarifying whether the smooth cross-over continues in this region, are the main goals of near-future explorations of the QCD phase diagram.
In the laboratory, a scan of the QCD phase diagram can be performed via heavy-ion collisions. The region of the QCD phase diagram at T≳150 MeV and µB≈0 is accessible in collisions at LHC energies (√sNN of several TeV), while the region of T≲100 MeV and µB≳500 MeV can be studied with collisions at √sNN of a few GeV. The QCD matter created in the overlap region of the colliding nuclei (the fireball) expands rapidly during the collision evolution. In the fireball there are strong temperature and pressure gradients, extreme electromagnetic fields, and an exchange of angular momentum and spin between the constituents of the system. These effects result in various collective phenomena. Pressure gradients and the scattering of particles, together with the initial spatial anisotropy of the density distribution in the fireball, generate anisotropic flow: a momentum-space (azimuthal) anisotropy in the emission of produced particles. The correlation of particle spin with the angular momentum of the colliding nuclei leads to a global polarization of particles. A strong initial magnetic field in the fireball results in a charge dependence and a particle-antiparticle difference of flow and polarization.
Anisotropic flow is quantified by the coefficients vₙ from a Fourier decomposition of the azimuthal angle distribution of emitted particles relative to the reaction plane spanned by the beam axis and the impact parameter direction. The first harmonic coefficient v₁ quantifies directed flow: preferential particle emission either along or opposite to the impact parameter direction. v₁ is driven by pressure gradients in the fireball and thus probes the compressibility of the QCD matter. The change of sign of v₁ at √sNN of several GeV is attributed to a softening of the EoS during the expansion and can thus be evidence of a first-order phase transition. The global polarization coefficient PH is the average projection of the hyperon spin onto the direction of the angular momentum of the colliding system. It probes the dynamics of the QCD matter, such as vorticity, and can shed light on the mechanism of orbital angular momentum transfer into the spin of produced particles.
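The Fourier decomposition described above is dN/dφ ∝ 1 + 2Σₙ vₙ cos[n(φ − Ψ_RP)], so each coefficient is simply the sample average vₙ = ⟨cos n(φ − Ψ_RP)⟩. A minimal sketch on a synthetic event sample (the flow magnitude and sample size below are illustrative only, not thesis data):

```python
import math
import random

def flow_coefficient(n, phis, psi_rp=0.0):
    """Estimate v_n = <cos[n(phi - Psi_RP)]> from particle azimuthal angles."""
    return sum(math.cos(n * (phi - psi_rp)) for phi in phis) / len(phis)

# Synthetic event sample with directed flow only: dN/dphi ~ 1 + 2*v1*cos(phi).
# Generated by rejection sampling; v1_true is an illustrative choice.
random.seed(1)
v1_true, phis = 0.1, []
while len(phis) < 100_000:
    phi = random.uniform(-math.pi, math.pi)
    if random.uniform(0.0, 1.0 + 2.0 * v1_true) <= 1.0 + 2.0 * v1_true * math.cos(phi):
        phis.append(phi)

print(round(flow_coefficient(1, phis), 2))  # recovers v1 close to 0.1
```

In a real analysis Ψ_RP is not known and must itself be estimated event by event (event-plane or scalar-product methods); here it is fixed to zero for clarity.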
In collisions at √sNN of a few GeV, which probe the region of the QCD phase diagram at T≲100 MeV and µB≳500 MeV, hadron production is dominated by u and d quarks. Hadrons containing strange quarks are produced near threshold, which makes their yields and dynamics sensitive to the density of the fireball. Measurements of flow and polarization, in particular of (multi-)strange particles, therefore provide experimental constraints on the EoS and allow transport coefficients of the QCD matter to be extracted from comparison of data with theoretical model calculations of heavy-ion collisions.
For the continuation of the abstract, see the PDF of the thesis.
This doctoral thesis addresses three main topics: 1) the highly effective acceleration of electrons and protons through the interaction of relativistic laser pulses with foams; 2) the generation and measurement of high-intensity betatron radiation from directly laser-accelerated (DLA) electrons; 3) the application of DLA electrons to the biological FLASH effect at a record-breaking dose rate.
The direct laser acceleration of electrons was studied in the interaction of a sub-ps laser pulse with an intensity of ~10^19 W/cm^2 with a plasma of near-critical electron density (NCD). A sub-mm-long NCD plasma was produced by heating a low-density foam with a ns pulse of 10^13-10^14 W/cm^2. The experiments were carried out at the PHELIX facility (Petawatt High-Energy Laser for Heavy-Ion Experiments) between 2019 and 2023. In the search for optimal conditions for electron and proton acceleration, the parameters of the ns pulse were varied and different targets were used. It was shown that the plasma in the foam provides good conditions for generating directed, ultra-relativistic DLA electrons with energies of up to 100 MeV. The electrons exhibit a Boltzmann-like energy distribution with a temperature of 10-20 MeV.
Optimal conditions for effective acceleration of DLA electrons were achieved by combining a CHO foam with a density of 2 mg/cm^3 and a thickness of 300-500 µm with a metal foil. The total charge of the detected electrons with energies above 1.5 MeV reached 0.5-1 µC, with a laser-energy conversion efficiency of ~20-30%.
Furthermore, proton acceleration by DLA electrons proceeds differently than in typical target normal sheath acceleration (TNSA). Magnetic spectrometers at various angles to the laser axis were used to study the local proton energy distribution. For this purpose, a filter method was developed that makes it possible to reconstruct spectra of protons with energies of up to 100 MeV. It was shown that at PHELIX, optimal proton acceleration was achieved by combining a ~300-400 µm thick CHO foam with a density of 2 mg/cm^3 and a 10 µm thick Au foil, at a sub-ps pulse intensity of ~10^19 W/cm^2 and using an optimized ns prepulse. A TNSA-like regime with a maximum cut-off energy of 34±0.5 MeV was observed. By comparison, typical TNSA using a 10 µm thick Au foil as target and the same laser intensity yielded a maximum cut-off energy of 24±0.5 MeV. Moreover, we observed a very weak decrease of the proton number with proton energy (unlike in typical TNSA) and a very regular proton beam distribution over a wide angular range up to high energies. This could be exploited to improve the quality of proton radiography of plasma fields.
In the DLA process (in the NCD plasma), betatron radiation arises from the oscillations of electrons in the quasi-static electric and magnetic fields of the plasma channel. To study this radiation, a new modified magnetic spectrometer (X-MS) was constructed. The X-MS enables the 1D resolution of multiple sources. Thanks to this capability, it was possible to separate the betatron radiation from the bremsstrahlung of ponderomotive electrons in the metal holder and to measure it.
In an experiment with a CHO foam of 2 mg/cm^3 density and ~800 µm thickness as target, the betatron radiation generated by the optimized DLA electrons was measured. At a peak intensity of ~3·10^13 W/cm^2 for the triangular ns pulse and ~10^19 W/cm^2 for the sub-ps pulse, which was delayed by 4±0.5 ns with respect to the ns pulse, the FWHM half-angle of the electron beam was 17±2°. Under these conditions, the betatron radiation was likewise directed, with an FWHM half-angle of 11±2° for photons with energies above 10 keV. The number of photons with energies above 10 keV was estimated at about 3·10^10 / 3·10^11 (directed photons / photons in the half-space along the laser beam direction). The maximum number of photons per solid angle was ~2·10^11 photons/sr. The brilliance of the recorded betatron radiation reached ~2·10^20 photons/s/mm^2/mrad^2/(0.1% BW) at 10 keV.
Using a high-current beam of DLA electrons for FLASH radiotherapy makes it possible to deliver a dose of up to 50-70 Gy within a single sub-ps laser pulse. In 2021, during the P213 beamtime at PHELIX, the dose-dependent drop in oxygen concentration upon irradiation of media (water and other biological media) with DLA electrons was investigated. The radiation dose was measured indirectly: a reconstruction method was developed that allows the dose inside the "water container" to be determined from measurements outside the container holding the investigated medium. Good agreement between the experiment and a Monte Carlo simulation for water was demonstrated. The recorded dose rate reached a record value of ~70 TGy/s.
This thesis aims to investigate the properties of hadronic matter by analyzing fluctuations of conserved charges. To this end, the transport model SMASH is used. The first part of the thesis focuses on transport coefficients, specifically the diffusion coefficients of conserved charges and the shear viscosity. The second part investigates equal-time correlations of particle numbers in the form of cumulants. The last chapter studies different aspects of the isobar collision systems Ru and Zr.
As a first step, the hadronic medium and the interactions between its constituents are introduced, and their impact on transport coefficients is investigated. The methodology is verified by comparing the results of SMASH with Chapman-Enskog calculations, followed by an examination of 3-to-1 multi-particle reactions, revealing their influence on the shear viscosity and the electric charge diffusion. The analysis of the full hadron gas considers angle-dependent cross-sections and additional elastic cross-sections via the AQM description, both of which significantly affect the transport coefficients. The dependence on the number of degrees of freedom is explored, with noticeable effects on the diffusion coefficients but a smaller influence on the shear viscosity. At non-zero baryon chemical potential, the diffusion coefficients are strongly affected, while the shear viscosity remains unchanged. Overall, the study underscores the importance of the individual cross-sections and of the modeling of interactions for transport coefficients.
The following chapter explores fluctuations of conserved charges, which are crucial for understanding the phase transition from the quark-gluon plasma to the hadronic phase in heavy-ion collisions. Using SMASH, the impact of global charge conservation on particle-number cumulants in subvolumes of boxes simulating infinite matter is studied. Comparisons with simpler systems highlight the influence of hadronic interactions on the cumulants, especially via charge annihilation processes, and the results from SMASH agree with analytical calculations. Calculations at finite baryon chemical potential reveal a transition from a Poisson to a Skellam distribution in the net-proton cumulants. It is shown that an unfolding procedure for obtaining the net-baryon fluctuations from the net-proton ones deviates from the actual net-baryon result, particularly in larger volumes. Finally, net-proton correlations at vanishing baryon chemical potential align with ALICE measurements, and the net-proton cumulants are unaffected by deuteron formation.
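The Poisson-to-Skellam transition mentioned above has a simple closed form: if protons and antiprotons are independent Poisson variables with means λ₊ and λ₋, the cumulants of their difference are κₙ = λ₊ + (−1)ⁿ λ₋. This is a textbook result, sketched here with illustrative means:

```python
# Cumulants of a Skellam distribution (difference of two independent
# Poisson variables). The numerical means below are illustrative only.

def skellam_cumulant(n, l_plus, l_minus):
    """n-th cumulant of N_plus - N_minus for independent Poissons."""
    return l_plus + (-1) ** n * l_minus

# At vanishing baryon chemical potential l_plus = l_minus, so all odd
# cumulants vanish; at large mu_B antiprotons become rare and the
# cumulants approach the single-Poisson value l_plus.
print(skellam_cumulant(3, 5.0, 5.0))  # 0.0
print(skellam_cumulant(2, 5.0, 5.0))  # 10.0
```

Deviations of measured net-proton cumulants from these baseline values are exactly what the analyses above use to expose interaction and conservation effects.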
In the next step, the goal is to investigate critical fluctuations in the hadronic medium. To this end, the hadronic system is initialized with critical equilibrium fluctuations by coupling the hadron resonance gas to the 3D Ising model. The single-particle probability distributions are derived from the principle of maximum entropy. By evolving these distributions in SMASH, their development in an expanding sphere adjusted to experimental conditions can be analyzed. This reveals resonance decays and formations as the primary processes affecting the particle cumulants. Because of isospin randomization, critical fluctuations are better preserved in net-nucleon numbers. However, for the strongest coupling investigated in this work, correlations of the critical field are still present in the net-proton fluctuations in the final state of the evolution. Examining the dependence of the cumulants on the rapidity window shows a non-monotonic trend.
In the third part, collisions involving the isobars Ru and Zr are studied at a center-of-mass energy of 200 GeV. Initially, SMASH is used to generate initial conditions for hydrodynamic simulations, emphasizing the importance of the nuclear structure of the isobars for the geometry of the collision zone. The deformation parameters are found to notably influence the initial state, whereas correlations between nucleon-nucleon pairs have no significant effect on the eccentricity fluctuations. Subsequently, the hydrodynamic model vHLLE evolves these initial conditions, and the Cooper-Frye formula is used for the transition between the hydrodynamic and kinetic descriptions. Use of the canonical ensemble ensures exact conservation of the conserved charges B, Q, and S. The neutron-skin effect, which changes the charge distribution within Ru nuclei, is additionally considered. The fluctuations are found to be suppressed in large rapidity windows due to global charge conservation. The hadronic phase modifies fluctuations of net pions, net kaons, and net protons via annihilation processes, yet the fluctuations remain unaffected by the neutron-skin effect.
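The Cooper-Frye particlization step referred to above is, in its textbook form (quoted as general background, not from this thesis), an integral of the phase-space distribution over the switching hypersurface $\Sigma$:

```latex
E \frac{\mathrm{d}N}{\mathrm{d}^3p} \;=\; \frac{g}{(2\pi)^3}
\int_{\Sigma} f(x, p)\, p^{\mu}\, \mathrm{d}\Sigma_{\mu},
```

where $g$ is the degeneracy factor and $f(x,p)$ the distribution function on the hypersurface. Sampling hadrons from this spectrum in a canonical ensemble is what enforces the event-by-event conservation of B, Q, and S mentioned above.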
Artificial electrical stimulation is often the only way to restore, to some extent, absent or lost motor and sensory function. In deaf patients, electrical stimulation of the peripheral auditory system with cochlear or brainstem implants is routinely used to evoke auditory sensations. It is necessary to reproduce natural neuronal discharge patterns with the electrically evoked ones. Single-channel systems can only convey the temporal structure of the signal. Multichannel systems additionally offer the possibility of stimulating specific groups of nerve fibers with spatial selectivity, thereby representing the spatial structure in the discharge patterns. It has been shown that speech intelligibility can be improved by using multichannel electrodes. A basic prerequisite is optimizing channel separation through miniaturized multichannel electrodes and choosing an optimal coding strategy for the signal.
The coding strategy depends on the specific application. Clopton and Spelman (1995), for example, already pointed out that the tripolar (S3) configuration, calculated to be selective, is valid only for a certain range of stimulation currents. In addition, simultaneous use of neighboring channels leads to painful loudness summation. The causes are, on the one hand, the overlap of the neuronal regions stimulated by the electrodes and, on the other hand, the interaction of currents from neighboring electrode channels. These effects not only reduce the spatial resolution of the stimulation but also limit the exact representation of the temporal structure within the individual stimulation channels.
The techniques and fundamentals of electrical stimulation of neuronal tissue with miniaturized multichannel electrodes have hardly been investigated so far. The aim of this work was to implement a mathematical model and to define quality parameters with which the distribution of the electric field and the resulting neuronal excitation can be described and optimized. To verify the model, methods and techniques were to be developed that allow high-resolution sampling of the electric fields and measurement of the neuronal data within a single measurement system.
Neuronal stimulation with miniaturized multichannel electrodes entails a number of fundamental problems. Stimulation far from the electrode requires larger stimulation currents than stimulation close to it, and the stimulation configuration plays a decisive role in the current demand: the S1 stimulation mode requires less current to reach large stimulation depths than the S2 mode. With increasing distance from the electrode, the largest current is required, in equal measure, by the S3 and S7 stimulation modes. At the same time, miniaturized multichannel electrodes have, by design, only small electrode contact surfaces and therefore permit only small stimulation currents because of the critical field strength.
A further problem at these miniature electrode dimensions lies in the exact position of the neurons at which excitation is evoked. With an electrode channel contact diameter of 70 µm, the dimensions of the miniaturized multichannel electrodes are already of the order of the neurons to be stimulated, which have diameters of 10 to 15 µm. This becomes particularly noticeable in the measurements when it is not the stimulation current that sets the size of the suprathreshold region, but rather the electrode channel spacing, varied by selecting the corresponding electrode channels. Most neuronal responses still point in the direction predicted by the model, but the scatter of the results is larger than in measurements with the foil electrode, which has a contact surface of 170 µm.
There is thus a number of limiting factors in the optimal dimensioning of the stimulation electrode, which depend both on the physiological topology and on the stimulation configurations used. For stimulation, the choice of the optimal coding strategy and the correct dimensioning of the stimulation electrode and of the electrode channel spacing are therefore of decisive importance.
For this question, the neuronal measurements were performed for the first time on brain slices, since, in contrast to in vivo experiments, they allow exact positioning of the electrodes on the slice under visual control through the microscope. From the neuronal measurements, the amplitudes and latencies of the excitatory postsynaptic potentials (EPSP) and of the field potentials were evaluated.
The experimental setup makes it possible to sample the potential fields with exactly the configurations that were used for the neuronal measurements on the brain slice. The implemented program for calculating the field distribution has an interface to the measurement program, so that the settings of the experiment, such as stimulation configurations, the field sampling grid, and the coordinates of the measurement volume, can be used in the model calculation. A direct comparison between measurement and calculation is thus possible. In subsequent work, the present results can serve as a basis for in vivo experiments.
The measurements were carried out with very small electrodes of our own fabrication, and newly developed foil electrodes were kindly provided by the Fraunhofer Institute St. Ingbert. The miniature multichannel electrodes of our own fabrication were about one order of magnitude smaller than currently used electrode types and are designed specifically for direct contact between electrode and tissue. This corresponds to the typical field of application of brainstem implants, and it is also necessary in order to achieve a maximal spatial separation of the generated fields. In addition, owing to the large number of electrode channels, the electrode design made it possible to determine the field direction by varying the configurations, without having to reposition the electrode on the brain slice.
The algorithm implemented in this work for computing the field distributions, together with the quality parameters introduced, allows the different stimulation configurations to be compared with one another and optimized. The results of these model computations were compared both with the measurements of the electric fields and with the neuronal responses.
The experimental setup built in the course of this work consisted of a micrometer-precision positioning system driven by several micromanipulators. Both the stimulation electrode and the electrode for recording the neuronal data could be controlled. The entire setup, i.e. the positioning, the recording of the neuronal data, and the generation of the stimulation patterns, was controlled from the central measurement computer by a computer program developed for this purpose. The experiments were recorded through an inverted microscope with a CCD camera.
The decisive advantage of the modeling approach chosen in this work lies in its fundamental description of the field distribution for multichannel stimulation, so that it can also be transferred to other electrode shapes, configurations, and dimensions. The various configurations can thus be evaluated according to defined quality criteria and adapted to the respective goal of the stimulation. The computed fields were successfully generated and verified in the measurement setup. Moreover, it was possible to evaluate differentiated neuronal activities that support the statements of the model.
The equation of state (EoS) of matter at extremely high temperatures and densities is currently not fully understood and remains a major challenge in the field of nuclear physics. Neutron stars harbor such extreme conditions and therefore serve as celestial laboratories for constraining the dense matter EoS. In this thesis, we present a novel algorithm that combines the idea of Bayesian analysis with the computational efficiency of neural networks to reconstruct the dense matter equation of state from mass-radius observations of neutron stars. We show that the results are compatible with those from earlier works based on conventional methods, and are in agreement with the limits on tidal deformabilities obtained from the gravitational wave event GW170817. We also observe that the squared speed of sound of the reconstructed EoS features a peak, indicating a likely convergence to the conformal limit at asymptotic densities, as expected from quantum chromodynamics. The novel algorithm can also be applied across various fields faced with computational challenges in solving inverse problems. We further examine the efficiency of deep learning methods for analyzing gravitational waves from compact binary coalescences. In particular, we develop a deep learning classifier to sort simulated gravitational wave data into three classes: signals from binary black hole mergers, signals from binary neutron star mergers, or white noise without any signals. A second deep learning algorithm allows for the regression of the chirp mass and the combined tidal deformability from simulated binary neutron star mergers. An accurate estimation of these parameters is crucial for constraining the underlying EoS. Lastly, we explore the effects of finite temperatures on the binary neutron star merger remnant from GW170817.
Isentropic EoSs are used to infer the frequencies of the rigidly rotating remnant, which turn out to be significantly lower than previous estimates from zero-temperature EoSs. Overall, this thesis presents novel deep learning methods to constrain the neutron star EoS, which will prove useful in the future as more observational data become available.
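The logic of inferring an EoS from mass-radius observations can be illustrated with a toy Bayesian inversion. The sketch below stands in for the actual TOV-plus-neural-network pipeline with a one-parameter linear forward model and a grid posterior; the `radius` function and all numbers are purely illustrative assumptions, not physics.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy forward model standing in for the TOV equations: maps an EoS
# "stiffness" parameter k to the radius (km) of a star of given mass.
def radius(k, mass):
    return 10.0 + 2.0 * k - 0.5 * mass   # illustrative, not physical

# Mock "observations": stars generated with k_true plus measurement noise
k_true, sigma_R = 1.2, 0.5
masses = np.array([1.4, 1.6, 1.8, 2.0])
obs_R = radius(k_true, masses) + rng.normal(0.0, sigma_R, masses.size)

# Bayesian inversion on a parameter grid: flat prior, Gaussian likelihood
ks = np.linspace(0.0, 3.0, 301)
logL = np.array([-0.5 * np.sum((obs_R - radius(k, masses))**2) / sigma_R**2
                 for k in ks])
post = np.exp(logL - logL.max())
post /= post.sum() * (ks[1] - ks[0])   # normalize to a density in k
k_map = ks[np.argmax(post)]            # maximum a posteriori stiffness
```

In the thesis the forward map is far more expensive, which is exactly where a neural-network surrogate pays off: the posterior scan requires many forward evaluations.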
Determination of the structure of complex I of Yarrowia lipolytica by single particle analysis
(2004)
Komplex I enthält ein Flavinmononukleotid sowie mindestens acht Eisen- Schwefel Zentren als redoxaktive Cofaktoren. Da ein wesentlicher Teil des mitochondrialen Genoms für Untereinheiten von Komplex I codiert, betrifft eine Vielzahl von mitochondrialen Erkrankungen diesen Enzymkomplex.
Komplex I wurde bisher aus Mitochondrien, Chloroplasten und Bakterien isoliert. Die Minimalform von Komplex I wird in Bakterien gefunden, wo er aus 14 (bzw 13 im Falle einer Genfusion) Untereinheiten besteht und eine Masse von etwa 550 kDa aufweist. Generell werden sieben hydrophile und sieben hydrophobe Untereinheiten mit über 50 vorhergesagten Transmembranhelices gefunden. Im Komplex I aus Eukaryoten wurde eine grössere Anzahl zusätzlicher, akzessorischer Untereinheiten nachgewiesen. Hier werden die sieben hydrophoben Untereinheiten vom mitochondrialen Genom codiert, während alle anderen Untereinheiten kerncodiert sind und in das Mitochondrium importiert werden müssen.
Die obligat aerobe Hefe Yarrowia lipolytica wurde als Modellsystem zur Untersuchung von eukaryotischem Komplex I etabliert. Die bisher am besten untersuchte Hefe Saccharomyces cerevisiae enthält keinen Komplex I. Hier wird die Oxidation von NADH durch eine andere Klasse von sogenannten alternativen NADH Dehydrogenasen durchgeführt. Auch Y. lipolytica enthält ein solches alternatives Enzym, das allerdings mit seiner Substratbindungsstelle zur Aussenseite der inneren Mitochondrienmembran orientiert ist. Durch molekularbiologische Manipulation konnte eine interne Version dieses Enzymes exprimiert werden, wodurch es möglich ist, letale Defekte in Komplex I Deletionsmutanten zu kompensieren. Mittlerweile wurden alle Voraussetzungen geschaffen, um kerncodierte Untereinheiten von Komplex I aus Y. lipolytica gezielt genetisch zu verändern. Die Proteinreinigung wird durch die Verwendung einer auf einem His-tag basierenden Affinitätsreinigung erheblich erleichtert...
The core of this work is the investigation of the chiral phase transition, using Monte Carlo simulations and unimproved staggered fermions, in both the weak and strong coupling regimes of Quantum Chromodynamics. Based on recent results from Monte Carlo simulations, using both unimproved staggered fermions and Wilson fermions, the chiral phase transition in the continuum and chiral limit is compatible with a second-order phase transition for Nf (number of flavours) in the range [2,7], at zero baryon chemical potential. This achievement relies on the analytic continuation of Nf to non-integer values on the lattice, which allows one to use extrapolation techniques towards the chiral limit, where simulations are not possible. Furthermore, these results resolve the ambiguous scenario for Nf = 2 in the chiral limit. The first part of this thesis is devoted to the investigation of the chiral phase transition at non-zero imaginary baryon chemical potential, whose value corresponds to 81% of the Roberge-Weiss one. Using the same extrapolation techniques, the order of the chiral phase transition in the continuum and chiral limit is compatible with a second-order phase transition for Nf in the range [2,6], highlighting a lack of dependence of the order of the chiral phase transition on the value of the imaginary baryon chemical potential. The second part of this thesis concerns the extension of the first-order chiral region into the strong coupling regime, at zero baryon chemical potential. Using Monte Carlo techniques, this is done by investigating the Z2 boundary on a coarse lattice with temporal extent Nt = 2, with simulations for Nf = 4, 8. The results in the weak coupling regime show, for Nt = 8, 6, 4 and fixed Nf, an expanding first-order chiral region.
Since a second-order chiral phase transition is expected in the strong coupling limit, the first-order chiral region has to shrink as the strong coupling regime is approached, resulting in a non-monotonic behaviour of the Z2 boundary. For Nf = 8, a critical mass on the Z2 boundary has been obtained, confirming the expected non-monotonic behaviour. For Nf = 4 the results do not allow a unique conclusion: either a Z2 boundary at extremely low bare quark mass or a second-order chiral phase transition in the O(2) universality class in the chiral limit is possible. In addition to the two main topics, the performance of the second-order minimum norm integrator (2MN) and the fourth-order minimum norm integrator (4MN) has been compared, after implementing the 4MN integrator in the CL2QCD code used for our simulations; the 2MN integrator had been part of the code since its first release. The two integrators belong to the class of symplectic integrators and are an essential component of the RHMC algorithm used in our investigation. This step is important to guarantee the best data quality in the simulations, and the comparison suggested favoring the 2MN integrator for both topics.
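A minimal sketch of the 2MN (second-order minimum-norm, Omelyan-type) scheme mentioned above, applied to a harmonic oscillator rather than the RHMC molecular-dynamics Hamiltonian: the five-stage splitting and the minimum-norm constant are standard, while the toy force and step counts are illustrative. Being symplectic, the integrator keeps the energy error bounded over long trajectories.

```python
import numpy as np

LAMBDA = 0.1931833275037836  # minimum-norm parameter of the 2MN scheme

def force(q):
    return -q   # harmonic oscillator, H = p^2/2 + q^2/2

def step_2mn(q, p, dt):
    """One 2MN step: q -> p -> q -> p -> q splitting."""
    q = q + LAMBDA * dt * p
    p = p + 0.5 * dt * force(q)
    q = q + (1.0 - 2.0 * LAMBDA) * dt * p
    p = p + 0.5 * dt * force(q)
    q = q + LAMBDA * dt * p
    return q, p

def energy(q, p):
    return 0.5 * (p * p + q * q)

q, p, dt = 1.0, 0.0, 0.2
e0 = energy(q, p)
for _ in range(1000):
    q, p = step_2mn(q, p, dt)
drift = abs(energy(q, p) - e0)   # stays small: no secular energy drift
```

In an RHMC context the same structure is used, with the force given by the gauge action and the pseudofermion contribution; the minimum-norm coefficient reduces the leading error term relative to plain leapfrog at equal cost per unit trajectory.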
A powerful technique to distinguish the enantiomers of a chiral molecule is Coulomb Explosion Imaging (CEI), which allows the handedness of a single molecule to be determined. In CEI, the molecule is charged by losing many electrons within a very short time through its interaction with light. The repulsive forces between the positively charged parts of the molecule then cause it to break into fragments. By measuring the momentum vectors of (at least) four fragments, the handedness observable can be determined. In this thesis, CEI is induced by absorption of a single high-energy photon, which creates an inner-shell (K-shell) hole in the molecule; the subsequent cascade of Auger decays leads to fragmentation. The formic acid molecule was chosen for this work. Two different experiments were conducted: the first focused on exciting electrons to different energy states, while the second focused on extracting a photoelectron directly into the continuum and measuring its angular distribution in the molecular frame. The primary goal was to search for a chiral signal in a purely achiral planar molecule under these electronic processes. The findings were then applied to two further molecules.
In the framework of the LHC Injectors Upgrade Project (LIU), the CERN Proton Synchrotron Booster (PSB) went through major upgrades resulting in new effects to study, challenges to overcome and new parameter regimes to explore. To assess the achievable beam brightness limit of the machine, a series of experimental and computational studies in the transverse planes were performed. In particular, the new injection scheme induces optics perturbations that are strongly enhanced near the half-integer resonance. In this thesis, methods for dynamically measuring and correcting these perturbations and their impact on the beam performance will be presented. Additionally, the quality of the transverse beam distributions and strategies for improvement will be addressed. Finally, the space charge effects when dynamically crossing the half-integer resonance will be characterized. The results of these studies and their broader significance beyond the PSB will be discussed.
This thesis provides a detailed derivation of dissipative spin hydrodynamics from quantum field theory for systems composed of spin-0, spin-1/2, or spin-1 particles.
The Wigner function formalism is introduced for quantum fields in the respective representations of the Poincaré group, and the conserved currents, i.e., the energy-momentum tensor and the total angular momentum tensor, in various so-called pseudogauges are derived. An expansion around the semiclassical limit in powers of the Planck constant is performed.
Subsequently, kinetic equations are obtained for binary elastic scattering, using both the de Groot-van Leeuwen-van Weert and the Kadanoff-Baym methods, the latter retaining the effects of quantum statistics. The resulting collision term features both local and nonlocal contributions, with the nonlocal part providing a relaxation mechanism for the spin degrees of freedom of the quasiparticles. The local-equilibrium distribution function is derived from the requirement that the local part of the collision term vanishes.
From quantum kinetic theory, dissipative spin hydrodynamics is then constructed via the method of moments, extended to particles with spin. The system of moment equations is closed via the Inverse-Reynolds Dominance (IReD) approach, resulting in a set of equations of motion describing the evolution of both ideal and dissipative degrees of freedom. The application to polarization phenomena relevant to heavy-ion collisions is discussed.
In this thesis, the flow coefficients vn of orders n = 1–6 are studied for protons and light nuclei in Au+Au collisions at Ebeam = 1.23 AGeV, equivalent to a center-of-mass energy in the nucleon-nucleon system of √sNN = 2.4 GeV. The detailed multi-differential measurement is performed with the HADES experiment at SIS18/GSI. HADES, with its large acceptance covering almost the full azimuth, combined with its high mass resolution and good particle-identification capability, is well equipped to study the azimuthal flow pattern not only for protons, deuterons, and tritons but also for charged pions, kaons, φ-mesons, electrons/positrons, as well as light nuclei like helions and alphas. The high statistics of more than seven billion Au+Au collisions recorded in April/May 2012 with HADES enables for the first time the measurement of higher-order flow coefficients up to the 6th harmonic. Since the Fourier coefficients of 7th and 8th order are below statistical significance, only upper bounds are given. The Au+Au collision system is the largest reaction system with the highest particle multiplicities measured so far with HADES. A dedicated correction method for the flow measurement had to be developed to cope with reconstruction inefficiencies due to the occupancy of the detector system. The systematic bias of the flow measurement is studied and several sources of uncertainty are identified, which mainly arise from the quality selection criteria applied to the analyzed tracks, the correction procedure for reconstruction inefficiencies, the procedures for particle identification (PID), and the effects of an azimuthally non-uniform detector acceptance. The systematic point-to-point uncertainties are determined separately for each particle type (proton, deuteron, and triton), the order of the flow harmonics vn, and the centrality class.
Further, the validity of the results is inspected within their evaluated systematic uncertainties through several consistency checks. In order to enable meaningful comparisons between experimental observations and predictions of theoretical models, the classification of events should be well defined and given in sufficiently narrow intervals of impact parameter. Part of this work was the implementation of the procedure to determine the centrality and the orientation of the reaction plane.
In the conclusion the experimental results are discussed, including various scaling properties of the flow harmonics. It is found that the ratio v4/v2 for protons and light nuclei (deuterons and tritons) at midrapidity approaches values close to 0.5 at high transverse momenta for all centrality classes, which has been suggested to be indicative of ideal hydrodynamic behaviour. A remarkable scaling is observed in the pt dependence of v2 (v4) at mid-rapidity for the three hydrogen isotopes when dividing v2 (v4) by their nuclear mass number A (A^2) and pt by A. This is consistent with naive expectations from nucleon coalescence, but raises the question whether this mass ordering can also be explained by a hydrodynamically inspired approach, like the blast-wave model. The relation of v2 and v4 to the initial eccentricity of the collision system is studied. It is found that v2 is independent of centrality for all three particle species after dividing by the averaged second-order participant eccentricity, i.e. v2/⟨ε2⟩. A similar scaling is shown for v4 after division by ⟨ε2⟩^2.
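The flow coefficients above are the Fourier coefficients of the azimuthal particle distribution relative to the reaction plane, v_n = ⟨cos n(φ − Ψ_RP)⟩. A toy sketch of this extraction, assuming a known reaction-plane angle and no detector effects (unlike the real analysis, which must correct for occupancy and acceptance):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_phi(n_particles, v2, psi_rp=0.0):
    """Sample azimuthal angles from dN/dphi ~ 1 + 2 v2 cos(2(phi - psi_rp))
    by von Neumann rejection."""
    out = np.empty(0)
    while out.size < n_particles:
        phi = rng.uniform(0.0, 2.0 * np.pi, 4 * n_particles)
        w = (1.0 + 2.0 * v2 * np.cos(2.0 * (phi - psi_rp))) / (1.0 + 2.0 * v2)
        out = np.concatenate([out, phi[rng.uniform(size=phi.size) < w]])
    return out[:n_particles]

def vn(phi, n, psi_rp=0.0):
    """Flow coefficient v_n = <cos n(phi - Psi_RP)>."""
    return np.mean(np.cos(n * (phi - psi_rp)))

phi = sample_phi(200_000, v2=0.15)
v2_est = vn(phi, 2)   # recovers the input elliptic flow of 0.15
v4_est = vn(phi, 4)   # no v4 was put in, so this is consistent with zero
```

In the real measurement Ψ_RP must itself be estimated event by event, which introduces the event-plane resolution corrections alluded to in the abstract.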
This thesis contains three theoretical works about certain aspects of the interplay of electronic correlations and topology in the Hubbard model.
In the first part of this thesis, the applicability of elementary band representations (EBRs) to diagnosing interacting topological phases that are protected by spatial symmetries and time-reversal symmetry, in terms of their single-particle Matsubara Green's functions, is investigated. EBRs for the Matsubara Green's function in the zero-temperature limit can be defined via the topological Hamiltonian. It is found that the Green's function EBR classification can only change by (i) a gap closing in the spectral function at zero frequency, (ii) the Green's function becoming singular, i.e. having a zero eigenvalue at zero frequency, or (iii) the Green's function breaking a protecting symmetry. As an example, the use of EBRs for Matsubara Green's functions is demonstrated on the Su-Schrieffer-Heeger model with exact diagonalization.
In the second part the Two-Particle Self-Consistent approach (TPSC) is extended to include spin-orbit coupling (SOC). Time-reversal symmetry, which is preserved in the presence of SOC, is used to derive new TPSC self-consistency equations including SOC. SOC breaks spin-rotation symmetry, which leads to a coupling of the spin and charge channels. The local and constant TPSC vertex then consists of three spin vertices and one charge vertex. As a test case for the interplay of Hubbard interaction and SOC, the Kane-Mele-Hubbard model is studied. The antiferromagnetic spin fluctuations are the leading instability, which confirms that the Kane-Mele-Hubbard model is an XY antiferromagnet at zero temperature. Mixed spin-charge fluctuations are found to be small. Moreover, it is found that the transversal spin vertices are more strongly renormalized than the longitudinal spin vertex, that SOC leads to a decrease of antiferromagnetic spin fluctuations, and that the self-energy shows dispersion and sharp features in momentum space close to the phase transition.
In the third part TPSC with SOC is used to calculate the spin Hall conductivity in the Kane-Mele-Hubbard model at finite temperature. The spin Hall conductivity is calculated once using only the conductivity bubble and once including vertex corrections. Vertex corrections for the spin Hall conductivity within TPSC correspond to analogues of the Maki-Thompson contributions, which physically describe the excitation and reabsorption of a spin, a charge, or a mixed spin-charge excitation by an electron. At all temperatures, the vertex corrections contribute strongly in the vicinity of the phase transition to the XY antiferromagnet, where antiferromagnetic spin fluctuations are large. It is found that vertex corrections are crucial to recover the quantized value of −2e^2/h in the zero-temperature limit. Further, at non-zero temperature, increasing the Hubbard interaction leads to a decrease of the spin Hall conductivity. The results indicate that scattering of electrons off antiferromagnetic spin fluctuations renormalizes the band gap. Decreasing the gap can be interpreted as an effective increase of temperature, leading to a decrease of the spin Hall conductivity.
In this work I investigate two different systems, spin systems and charge-density waves, using the same theoretical method for both. My investigations are motivated by experiments, and the goal is to describe the experimental results theoretically. For this purpose I formulate kinetic equations starting from the microscopic dynamics of the systems.
First of all, a method is formulated to derive the kinetic equations diagrammatically. Within this method an expansion in equal-time connected correlation functions is carried out. The generating functional of connected correlations is employed to derive the method.
The first system to be investigated is a thin stripe of the magnetic insulator yttrium iron garnet (YIG), in which magnons are pumped parametrically with an external microwave field. The motivation of my theoretical investigations is to explain the experimental observations: in a small parameter range close to the confluence field strength, where confluence processes of two parametrically pumped magnons with the same wave vector become kinematically possible, the efficiency of the pumping is reduced or enhanced depending on the pumping field strength. Because confluence and splitting processes of magnons are expected to be essential for these observations, I go beyond the kinetic theories conventionally applied in the context of parametric excitation in YIG and investigate the influence of cubic vertices on the parametric instability of magnons in YIG.
Furthermore, the influence of phonons is investigated. In the literature these are usually taken into account as a heat bath. Here, I want to explain experiments in which an accumulation of magnetoelastic bosons, i.e. magnon-phonon quasi-particles, has been observed. I employ the method of kinetic equations to investigate this phenomenon theoretically. The kinetic theory is able to reproduce the experimental observations, and it is shown that the accumulation of magnetoelastic bosons is purely incoherent.
Finally, charge-density waves (CDWs) in quasi-one-dimensional materials are investigated. Charge-density waves emerge from a Peierls instability and are a prime example of spontaneous symmetry breaking in solids. Again, the motivation for my theoretical investigations is an experiment in which the spectrum of the amplitude and phase phonon modes has been measured. Starting from the Fröhlich Hamiltonian I derive kinetic equations, from which the equations of motion for the CDW order parameter can be obtained. The frequencies and damping rates of the amplitude and phase phonon modes are derived from the linearized equations of motion. I compare my theory with existing methods and also investigate the influence of the Coulomb interaction.
This thesis investigates exotic phases within effective models for strongly interacting matter.
The focus lies on the chiral inhomogeneous phase (IP) that is characterized by a spontaneous breaking of translational symmetry and the moat regime, which is a precursor phenomenon exhibiting a non-trivial mesonic dispersion relation.
These phenomena are expected to occur at non-zero baryon densities, a parameter region that is largely inaccessible to first-principles investigations of Quantum Chromodynamics (QCD).
As an alternative approach, we consider the Gross-Neveu (GN) and Nambu-Jona-Lasinio (NJL) model within the mean-field approximation, which can be regarded as effective models for QCD.
We focus on two aspects of the moat regime and the IP in these models.
First, we investigate the influence of the employed regularization scheme in the (3+1)-dimensional NJL model, which is nonrenormalizable, i.e., the regulator cannot be removed.
We find that the moat regime is a robust feature under change of regularization scheme, while the IP is sensitive to the specific choice of scheme.
This suggests that the moat regime is a universal feature of the phase diagram of the NJL model, while the IP might only be an artifact of the employed regulator.
Second, we study the influence of the number of spatial dimensions on the emergence of the IP.
To this end, we investigate the GN model in noninteger spatial dimensions d.
We find that the IP and the moat regime are present for d < 2, while they are absent for d > 2.
This demonstrates the central role of the dimensionality of spacetime and connects previously obtained results in this model for integer numbers of spatial dimensions.
Moreover, this suggests that the occurrence of these phenomena in three spatial dimensions is solely caused by the finite regulator.
In summary, this thesis contributes to advancing our understanding of the phase structure of QCD, particularly regarding the existence and characteristics of inhomogeneous phases and the moat regime.
Even though the investigations are performed within effective models, they provide valuable insight into the aspects that are crucial for the formation of an inhomogeneous chiral condensate in fermionic theories.
By combining two unique facilities at the Gesellschaft fuer Schwerionenforschung (GSI), the Fragment Separator (FRS) and the Experimental Storage Ring (ESR), the first direct measurement of a proton capture reaction of stored radioactive isotopes was accomplished. The combination of well-defined ion energy, an ultra-thin internal gas target, and the ability to adjust the beam energy in the storage ring enables precise, energy-differentiated measurements of the (p,gamma) cross sections. The new setup provides a sensitive method for measuring (p,gamma) reactions relevant for nucleosynthesis processes in supernovae, which are among the most violent explosions in the universe and are not yet well understood. The cross sections of the 118Te(p,gamma) and 124Xe(p,gamma) reactions were measured
at energies of astrophysical interest. The heavy ions were stored with energies of 6 MeV/nucleon and 7 MeV/nucleon and interacted with a hydrogen gas-jet target.
The produced proton-capture products were detected with a double-sided silicon strip detector. The radiative recombination process of the fully stripped ions and electrons from the hydrogen target was used as a luminosity monitor.
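Schematically, the cross section follows from the detected yield, the detection efficiency, and the integrated luminosity provided by the recombination monitor, σ = N / (ε · L_int). A sketch with purely illustrative numbers, not the measured 118Te(p,gamma) or 124Xe(p,gamma) values:

```python
# Hedged sketch of a cross-section estimate from detector counts;
# every number below is an illustrative assumption.

N_det = 1200            # detected proton-capture products
efficiency = 0.35       # combined geometric/detection efficiency (assumed)
L_int = 2.5e31          # integrated luminosity in cm^-2, taken from the
                        # radiative-recombination luminosity monitor (assumed)

sigma_cm2 = N_det / (efficiency * L_int)   # cross section in cm^2
sigma_mb = sigma_cm2 / 1e-27               # 1 mb = 1e-27 cm^2
```

The attraction of the storage-ring scheme is that L_int is known from an independent, well-understood electromagnetic process, so the nuclear cross section does not rely on an absolute beam-current or target-thickness measurement.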
Additionally, post-processing nucleosynthesis simulations within the NuGrid [1] research platform have been performed. The impact of the new experimental results on the p-process nucleosynthesis around 124Xe and 118Te in a core-collapse supernova was investigated. The successful measurement of the proton capture cross sections of radioactive isotopes raises the motivation to proceed with experiments at lower energies.
[1] M. Pignatari and F. Herwig, “The NuGrid research platform: A comprehensive simulation approach for nuclear astrophysics,” Nuclear Physics News, vol. 22, no. 4, pp. 18–23, 2012.
In this thesis, we present a detailed study of both qualitative and quantitative properties of static spherically symmetric solutions of the Einstein equations with self-interacting scalar fields. Our focus is on solutions with naked singularities. We study the qualitative properties of the solutions of the Einstein equations with $N$ real static self-interacting scalar fields, under certain assumptions on the self-interaction. We provide a rigorous proof that the corresponding solutions are regular up to $r=0$. Furthermore, we derive the rigorous form of the asymptotic solutions near the singularity and at spatial infinity. We construct examples of spherical-like naked singularities at $r=r_s\neq0$ in curvature coordinates.
We analyze the stability of the previously considered solutions against odd-parity gravitational perturbations and also examine the spectra of the fundamental quasi-normal modes. For a general class of self-interaction potentials, we demonstrate well-posedness of the initial value problem and stability for positive-definite potentials. As an example, we numerically study the case of a scalar field with a power-law self-interaction potential and find the fundamental quasi-normal mode frequencies. We demonstrate that they differ from those of the standard Schwarzschild black hole.
We study in detail the motion of particles in the vicinity of the previously considered solutions. Mainly, we are interested in the properties of the distribution of stable circular orbits around the corresponding configurations and in the images of the accretion disk seen by a distant observer. For all cases, we find the possible types of stable circular orbit distributions and the parameter domains in which they are realized.
We also demonstrate that the presence of self-interaction can lead to a new type of circular orbit distribution, which is absent in the linear massless scalar field case. We construct Keplerian disk images in the plane of a distant observer and demonstrate the possibility of mimicking the shadows of black holes.
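As a baseline against which such circular-orbit distributions are compared, the standard Schwarzschild case can be sketched numerically: stable circular orbits exist where the angular momentum of circular geodesics grows with radius, and the innermost stable circular orbit (ISCO) sits at r = 6M. The grid and search below are illustrative; the formula for L² is the standard Schwarzschild result.

```python
import numpy as np

M = 1.0  # geometric units, G = c = 1

def L2(r):
    """Squared specific angular momentum of a circular geodesic
    at radius r in the Schwarzschild metric (valid for r > 3M)."""
    return M * r**2 / (r - 3.0 * M)

# Stable circular orbits require dL^2/dr > 0; the ISCO is where the
# sign of dL^2/dr changes, analytically at r = 6M.
r = np.linspace(3.001 * M, 20.0 * M, 200_000)
dL2 = np.gradient(L2(r), r)
r_isco = r[np.argmax(dL2 > 0.0)]   # first radius with dL^2/dr > 0
```

For the scalar-field configurations of the thesis the analogue of L²(r) has to be built from the numerically obtained metric functions, and its monotonicity pattern can differ, which is precisely what produces the new orbit-distribution types.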
In this thesis, the early-time dynamics in heavy-ion collisions of Pb nuclei at the LHC center-of-mass energy of 5 TeV is studied. Right after the collision, the system is out of equilibrium and essentially gluon dominated, with the gluon density saturating at a specific momentum scale Q_s. Based on a separation of scales between the soft and hard gluonic degrees of freedom, the initial state is given by an effective model known as the Color Glass Condensate. Within this model, the soft gluons behave classically to leading order, making it possible to study their dynamics in a gauge-invariant fashion on a three-dimensional lattice by solving the Hamiltonian field equations of motion in real time. Quark-antiquark pairs are produced in the gluonic medium, known as the Glasma, and manifest themselves as a source of quantum fluctuations.
They enter the dynamics of the gluons as a current, making the system semi-classical. In lattice simulations, the non-equilibrium system is tested for pressure isotropization, a necessary ingredient for reaching local thermal equilibrium (LTE) and thus for a hydrodynamical description at a later stage. In addition, the occupation of energy modes is studied, with its implications for thermalization and classicality.
The ALICE experiment (A Large Ion Collider Experiment) at the CERN (Conseil Européen pour la Recherche Nucléaire) LHC (Large Hadron Collider) focuses on the study of strongly interacting matter under extreme conditions. Such conditions existed a few microseconds after the Big Bang, when temperatures were so high that partons (quarks and gluons) were not bound into color-neutral hadrons. In such a quark-gluon plasma (QGP) the partons can move freely, while interacting strongly with the other partons of the medium. At the LHC, lead nuclei are accelerated to ultra-relativistic energies of up to 2.68 TeV and brought to collision, creating for less than 10 fm/c a QGP that expands rapidly. The partons hadronize when the QGP cools below the phase-transition temperature of ≈ 155 MeV. The final particle and momentum distributions are measured by the ALICE detector and give insight into elementary processes in the QGP.
The TPC (Time Projection Chamber) is one of the most important detector systems of ALICE. It contributes decisively to the reconstruction of particle tracks and to the identification of particle species at mid-rapidity. The TPC is a large cylindrical drift chamber consisting of an 88 m³ gas volume that is divided into two halves by the central high-voltage electrode. When a particle traverses the gas volume, it ionizes a specific number of gas atoms along its track. The ionization electrons drift along the highly homogeneous electric field to the readout chambers at the end caps on both sides of the TPC. Measuring the position and the amount of the ionization electrons allows the reconstruction of the particle track and, combined with the momentum measurement via the curvature of the track in the magnetic field, the determination of the particle species via the specific energy loss per path length in the gas. The gas volume of the TPC was filled with Ne-CO_2 (90-10) in LHC Run 1 (2010–2013); the mixture was changed to Ar-CO_2 (88-12) for Run 2 (2015–2018). Multi-wire proportional chambers were used as readout chambers, consisting of a segmented readout plane, an anode-wire plane, a cathode-wire plane, and a gating grid (GG). The GG is an additional wire plane that can be switched, via two different voltage settings, to be transparent or opaque to electrons and positive ions.
In the first Run 2 data taken at high interaction rates, large distortions of the measured track points were observed, caused by distortions of the drift field and not known from the Run 1 data. These distortions occur only very locally, at the boundaries of some of the inner readout chambers (IROCs). In addition, large distortions were found in one of the outer readout chambers (OROC C06), extending at a certain radius across the full width of the chamber. The results of this thesis concern the investigation of these distortions and their origin, as well as the development of strategies to minimize them.
Measurements of the distortions in the IROCs and comparisons with simulations indicate that the distortions are caused by positive space charge, produced by gas amplification in very confined regions of the readout chambers, which then propagates through the drift volume. Characteristic dependencies on the interaction rate as well as systematic changes upon reversal of the magnetic-field orientation are measured. A re-analysis of Run 1 data with the Run 2 methods shows that the distortions were already present in Run 1, but were an order of magnitude smaller due to the Ne gas mixture and the lower interaction rates. New Run 2 data, for which the gas mixture was temporarily changed back from Ar-CO_2 to Ne-CO_2-N_2, confirm the results of the Run 1 data analysis. The origin of the space charge is systematically narrowed down. Individual IROCs are identified at whose anode wires the space charge is produced. Physical models make it possible to trace the production of the space charge back to the volume between two adjacent IROCs. This suggests that individual anode-wire tips at the outer edge of these IROCs protrude into the gas volume and thus create high electric fields at which gas amplification takes place. The positive ions can then reach the drift volume unhindered. To suppress this effect, the potential of the cover electrodes, located on the mounting structures of the wire planes at the chamber edges, is adjusted. This limits the amount of ionization electrons that drift into the volume between two IROCs and are multiplied there. Using electrostatic simulations and measurements, a setting of the cover-electrode potential is found that reduces the distortions to 30%.
The distortions in OROC C06 are caused by positive ions escaping from the amplification region into the drift volume, because at this particular location two consecutive GG wires have lost contact. The distortions are reduced by more than a factor of 3 by lowering the anode-wire high voltage by 50 V, thereby halving the gas gain, and by raising the potential of the still-functioning GG wires.
In summary, the local space-charge distortions could be reduced to less than 1 cm at the highest interaction rates for the last Pb–Pb beam time of Run 2. In addition, the fraction of the TPC volume affected by space-charge distortions was significantly reduced, so that the original track-reconstruction resolution could be recovered.
Experiments on Vibrational Energy Transfer (VET) in proteins contribute to our understanding of fundamental biological processes such as allostery, dissipation of excess energy, and possibly enzymatic catalysis. While these processes have been studied for a long time, many questions remain unanswered. The aim of this work was to expand the application of existing spectroscopic techniques to investigate VET, seeking tailored solutions for the diversity of proteins and amino acid environments. Additionally, new target proteins were to be established to broaden the spectrum of VET experiments towards the role of VET and low-frequency protein modes (LFMs).
To test their suitability as VET sensors, the non-canonical amino acids (ncAAs) Azidoalanine (N3Ala), azido-L-Homoalanine (Aha), p-azido-Phenylalanine (N3Phe), p-cyano-Phenylalanine (CNPhe), and 4-cyano-Tryptophan (CNTrp) were coupled to the VET donor β-(1-azulenyl)-L-Alanine (AzAla) in dipeptides. Their spectral properties were compared using FTIR and VET spectra in H2O, dimethyl sulfoxide, and tetrahydrofuran.
The solvent strongly influences the measured VET signals, which can be explained by the direct interaction of the solvent with the dipeptides. Additionally, the peak time within the subgroups of azide and nitrile sensors increased with the size of the side chain, indicating a dependence of the peak time on the distance between VET donor and sensor. When incorporated into a protein, solvent interactions are less dominant. Therefore, Aha, N3Phe, and CNPhe were additionally incorporated at two different positions in the PDZ protein domain and investigated. Due to Fermi resonances, signals from azide sensors are challenging to predict, unlike those of the nitrile sensors.
Overall, the experiments showed that nitrile groups can serve well as VET sensors, as their lower extinction coefficient is compensated for by a narrower bandwidth. This expands the number of potential target proteins, and sensor incorporation can be less disruptive at various protein locations.
Since the VET donor AzAla can inject the energy of a photon into a protein as vibrational energy at a specific location, it can also be used for the targeted excitation of LFMs. If these modes are involved in an enzymatic reaction, a direct influence on activity is expected. This hypothesis has long existed but has not been definitively verified. Some studies have found evidence for the involvement of LFMs in formate dehydrogenase (FDH) catalysis. Therefore, FDH was chosen for the investigation of LFMs in enzymes. This specific system additionally allows the use of a natural VET sensor: it forms a stable complex with NAD+ and N3-, an excellent IR marker. Thus, it provided the opportunity to test low-molecular-weight non-covalent ligands as VET sensors.
After ensuring a sufficient AzAla supply through the in-house establishment of an enzymatic synthesis, AzAla could be incorporated at various positions in FDH. Despite the spectral overlap between free and bound N3-, the latter could be identified by its narrower FWHM. For some variants, no binding could be observed. Circular dichroism spectra showed that these variants deviate slightly in structure from the other variants and the wild type (WT). VET could be observed over 22 Å, from two regions of the protein to the N3- bound in the active center, at protein concentrations below 2 mM. Unbound N3- did not generate signals, allowing it to be added in excess, ensuring saturation of the protein in the VET experiments.
The activity of FDH WT and four AzAla mutants was investigated under substrate saturation, both without and with AzAla excitation. In these experiments, a slight reduction in activity under illumination was observed, even for the WT, which is not expected to interact with the excitation light. So far, a difference in sample temperature cannot be excluded as the cause of this decline.
The presented experiments with FDH illustrate the potential of low-molecular-weight ligands as VET sensors, with N3- being particularly attractive due to its simple structure (preventing Fermi resonances) and its high extinction coefficient. Its use can add many metalloproteins as potential targets for VET experiments and allows investigation without a VET sensor ncAA. Additionally, initial experiments were conducted to measure light-dependent FDH activity. By specifically exciting protein LFMs, this project could contribute in the future to answering longstanding questions about the extraordinary catalytic efficiency of enzymes.
Binary neutron star mergers represent unique observational phenomena because all four fundamental interactions play an important role at various stages of their evolution by leaving imprints in astronomical observables. This makes their accurate numerical modeling a challenging multiphysics problem that promises to increase our understanding of the high-energy astrophysics at play, thereby providing constraints for the underlying fundamental theories such as the gravitational interaction or the strong interaction of dense matter. For example, the first and so far only multi-messenger observation of the binary neutron star merger GW170817 resulted in numerous bounds on the parameters of isolated non-rotating neutron stars, e.g., their maximum mass or their distribution in radii, which can be directly used to constrain the equation of state of cold nuclear matter. While many of these results stem from the observation of the inspiral gravitational-wave signal, the postmerger phase of binary neutron star mergers encodes even more details about the extreme physics of hot and dense neutron star matter. In this Thesis we focus on the exploration of dissipative and shearing effects in binary neutron star mergers in order to identify novel approaches to constrain hot and dense neutron star matter.
The first effect is the well-motivated dissipation of energy due to the bulk viscosity, which arises from violations of weak chemical equilibrium. We start by exploring the impact of bulk viscosity on black-hole accretion. This simplified problem gives us the opportunity to develop a test case for future codes that take the effects of dissipation into account in a fully general-relativistic setup, and to build intuition for the physics of relativistic dissipation. Next, we move on to isolated neutron stars and binary neutron star mergers by developing a robust implementation of bulk-viscous dissipation for numerical-relativity simulations. We test our implementation by calculating the damping of eigenmodes of isolated neutron stars and the violent migration scenario. Finally, we present the first results on the impact of bulk viscosity on binary neutron star mergers. We identify a number of ways in which bulk viscosity affects the postmerger phase, of which the suppression of gravitational-wave emission and of dynamical mass ejection are the most notable.
In the last part of this Thesis we investigate how the shearing dynamics at the beginning of the merger affects the amplification of different initial magnetic-field topologies. We explore the hypothesis that magnetic fields which are confined to a small region near the stellar surface prior to merger lead to a weaker magnetic-field amplification. We show first evidence confirming this hypothesis and discuss possible implications for constraining the physics of superconductivity in cold neutron stars.
This work focuses on the investigation of K+, K- and ϕ-meson production in Ag(1.58 A GeV)+Ag collisions. The energetically cheapest channel for direct K+ production in binary NN collisions, NN→NΛK+, lies at exactly this energy. For the K- and ϕ mesons, an excess energy of 0.31 GeV and 0.34 GeV, respectively, has to be provided by the system in the centre-of-mass frame. This makes these particles excellent probes for in-medium effects.
K+ and K- mesons can be reconstructed directly, as they possess a cτ of approximately 3.7 m. Using the approximately 3 billion recorded Ag(1.58 A GeV)+Ag collision events in the 0-30% most central class, all K+ and K- reconstructed within the detector acceptance are investigated with respect to their kinematic properties, and their production rates are compared to a selection of existing models.
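The direct reconstruction relies on the kaon's long decay length. A minimal estimate of the fraction of kaons surviving a given flight path, using the PDG values for the charged-kaon mass and cτ:

```python
import math

M_KAON_GEV = 0.4937   # charged-kaon mass in GeV/c^2 (PDG)
C_TAU_M = 3.712       # charged-kaon decay length c*tau in metres (PDG)

def survival_probability(p_gev, path_m):
    """Fraction of charged kaons with momentum p that traverse path_m
    before decaying: P = exp(-L / (beta*gamma*c*tau))."""
    beta_gamma = p_gev / M_KAON_GEV
    return math.exp(-path_m / (beta_gamma * C_TAU_M))

# e.g. a kaon with 1 GeV/c momentum over a 2 m flight path:
p_survive = survival_probability(1.0, 2.0)   # roughly 3 out of 4 survive
```

This is why, unlike the ϕ meson, which must be reconstructed from its decay products, charged kaons mostly reach the detector intact.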
The Compressed Baryonic Matter (CBM) experiment is one of the core experiments at the future Facility for Antiproton and Ion Research (FAIR) in Darmstadt, Germany. Its goal is to investigate the characteristics of nuclear matter at high net-baryon densities and moderate temperatures. The Silicon Tracking System (STS) is a central detector system of CBM.
It is placed inside a 1 T·m magnet and operated at a temperature of about −10 °C to keep the radiation-induced bulk current in the 300 µm thick double-sided microstrip silicon sensors low. The design of the STS aims to minimize the material budget in the detector acceptance (2.5° < θ < 25°). To this end, the readout electronics are placed outside the active area, and the analog signals are transported via ultra-thin micro-cables. The STS comprises eight tracking stations with 876 modules. Each module is assembled on a carbon-fiber ladder, which is subsequently mounted in a C-shaped aluminum frame.
The scope of this thesis was the development of a modular control-system framework that can be implemented for experimental setups of different sizes. The developed framework was used for setups that required remote operation, such as the irradiation of the powering modules for the front-end electronics (FEE), but also in laboratory-based setups where automation and archiving were needed (thermal cycling of the STS electronics).
The low-voltage powering modules will be placed in the vicinity of the experiment and will therefore experience a total dose of up to 40 mGy over the 10-year lifetime of the STS.
To estimate the effects of radiation on the performance of the low-voltage modules, a dedicated irradiation campaign took place. It aimed at estimating the rate of radiation-induced soft errors that lead to the switch-off of the FEE.
Regular power cycles of multiple front-end boards (FEBs) pose a risk to experiment operation: such behavior could negatively influence the physics performance and also have deteriorating effects on the hardware. The limitations of the FEBs with respect to thermal cycling and mechanical stress were further assessed. The results serve as an indication of possible failure modes of the FEBs at the end of the STS lifetime. Failure modes after repeated cycles and their potential causes were determined (e.g., the difference in the coefficient of thermal expansion (CTE) between the materials).
Due to the conditions inside the STS, efficient temperature and humidity monitoring and control are required to avoid icing or water condensation on the electronics and silicon sensors. The most important properties of a suitable sensor candidate are resilience to the magnetic field, tolerance to ionizing radiation, and a fairly small size.
A general strategy for monitoring the ambient parameters inside the STS was developed, and potential sensor candidates were chosen. The developed control framework was used to characterize the chosen relative-humidity sensors. A sampling system with a ceramic sensor and fiber-optic sensors (FOS) were identified as reliable solutions for the distributed sensing system. Additionally, industrial capacitive sensors will be used as a reference during commissioning.
Two different FOS designs were tested: a hygrometer and an array of five multiplexed sensors. The FOS hygrometer turned out to be the more reliable solution. Possible reasons for the poorer performance of the array are the relatively small spacing between subsequent sensors (15 cm) and a thicker coating. The results of the time-response study indicated that a thinner coating of about 15 µm should be a good compromise between humidity sensitivity and time response.
The implementation of the container-based control-system framework for the mSTS is described in detail. The deployed EPICS-based framework proved to be a reliable solution and ensured the safety of the detector for almost 1.5 years. Moreover, the data related to the performance of the detector modules were analyzed, and significant progress in module quality was noted. The obtained data were also used to estimate the total fluence, based on the changes in leakage current.
The developed framework provided a unique opportunity to automate and control different experimental setups, which provided crucial data for the STS. Furthermore, the work underlines the importance of such a system and outlines the next steps toward the realization of a reliable detector control system for the STS.
In the last twenty years, a variety of unexpected resonances have been observed in the charmonium mass region. Although the existence of unconventional states is predicted by quantum chromodynamics (QCD), the quantum field theory describing the strong force, clear evidence was missing. The Y(4260) is such an unexpected and supernumerary state. First observed at BaBar in 2005, it aroused great interest because it couples much more strongly to hidden-charm decays (charm-anticharm states like J/Psi or h_c) than to open-charm decays (D-meson pairs). This is unusual for states with masses above the D anti-D threshold. Furthermore, it decays into a charged exotic state, Y(4260) -> Z_c(3900)^+- pi^-+. The charge of the Z_c(3900)^+- indicates that it comprises two more quarks than a charm-anticharm pair, and it could therefore be a four-quark state. Due to these still not understood properties of these QCD-allowed states, they are referred to as exotic XYZ states to emphasize their particularity.
In 2017, the Beijing Spectrometer III (BESIII) collaboration investigated the production of the Y(4260) resonance based on a high-luminosity data set. The significantly improved precision of the measurement of the cross section sigma(e+e- -> J/Psi pi^+ pi^-) permitted its resolution into two resonances, the Y(4230) and the Y(4360). The Z_c(3900)^+- had been discovered by the BESIII collaboration in 2013; thus this experiment at the Beijing Electron-Positron Collider II (BEPCII) is a top-performing facility for studying exotic charmonium-like states.
In this work, an inclusive reconstruction of the strange hyperon Lambda in the charmonium mass region is performed to study possible decays of Y states in order to provide further insight into their nature. Finding more states or new decay channels may provide crucial hints to understand the strong interaction beyond nonperturbative approaches.
Three resonances are observed in the energy-dependent cross section: the first with a mass of (4222.01 ± 5.68) MeV and a width of (154.26 ± 28.16) MeV, the second with a mass of (4358.88 ± 4.97) MeV and a width of (49.58 ± 13.54) MeV, and the third with a mass of (4416.41 ± 2.37) MeV and a width of (23.88 ± 7.18) MeV. These resonances, each with a statistical significance Z > 5σ, can be interpreted as the states Y(4230), Y(4360) and psi(4415).
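Such an energy-dependent cross section is typically modelled as a coherent sum of Breit-Wigner amplitudes. A minimal sketch using the fitted central values quoted above; the couplings and relative phases, which are free parameters in the real fit, are set to unity here for illustration:

```python
# Masses and widths in GeV, taken from the fitted central values above.
RESONANCES = [(4.22201, 0.15426), (4.35888, 0.04958), (4.41641, 0.02388)]

def breit_wigner(sqrt_s, mass, width):
    """Constant-width Breit-Wigner amplitude 1 / (s - m^2 + i*m*Gamma)."""
    return 1.0 / complex(sqrt_s**2 - mass**2, -mass * width)

def cross_section_shape(sqrt_s):
    """|sum of amplitudes|^2, proportional to the cross-section shape."""
    amplitude = sum(breit_wigner(sqrt_s, m, g) for m, g in RESONANCES)
    return abs(amplitude) ** 2
```

The shape peaks near the resonance masses; interference between the overlapping amplitudes is what makes the phases physically relevant in the actual analysis.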
Additionally, a proton-momentum-dependent analysis strategy was used to maintain the inclusiveness of the reconstruction and to address the momentum discrepancies between generic MC and measured data.
This Ph.D. thesis, entitled "Characterisation of laser-driven radiation beams: Gamma-ray dosimetry and Monte Carlo simulations of optimised target geometry for record-breaking efficiency of MeV gamma-sources", is dedicated to the study of the acceleration of electrons by intense sub-picosecond laser pulses propagating in a sub-millimeter plasma of near-critical electron density (NCD), and the resulting generation of gamma bremsstrahlung and positrons in targets of different materials and thicknesses.
Laser-driven particle acceleration is an area of increasing scientific interest since the recent development of short-pulse, high-intensity laser systems. The interaction of intense, high-energy, short-pulse lasers with solid targets leads to the production of high-energy electrons in the relativistic laser intensity regime above 10^18 W/cm². These electrons play the leading role in the first stage of the laser-matter interaction, which leads to the creation of laser-driven sources of particles and radiation. Therefore, optimising the electron-beam parameters towards higher effective temperature and beam charge, together with small divergence, plays a decisive role, especially for the subsequent detection and characterisation of laser-driven photon and positron beams.
In the context of this work, experiments were carried out at the PHELIX laser system (Petawatt High-Energy Laser for Heavy Ion eXperiments) at the GSI Helmholtz Centre for Heavy-Ion Research GmbH in Darmstadt, Germany. This thesis presents a thermoluminescence dosimetry (TLD) based method for the measurement of bremsstrahlung spectra in the energy range from 30 keV to 100 MeV. The results of the TLD measurements reinforced the observed tendency towards a strong increase of the mean electron energy and the number of super-ponderomotive electrons. In the case of laser interaction with long-scale NCD plasmas, the dose caused by the gamma radiation measured in the direction of the laser-pulse propagation showed a 1000-fold increase compared to high-contrast shots onto plane foils and to the doses measured perpendicular to the laser propagation direction, for all used combinations of target and laser parameters.
In this thesis I present a novel characterisation method for laser-driven beams, using a combination of TLD measurements and Monte Carlo FLUKA simulations. The thermoluminescence-detector-based spectrometry method for the simultaneous detection of electrons and photons from relativistic laser-induced plasmas, initially developed by Behrens et al. (Behrens et al., 2003) and further applied in experiments at the PHELIX laser (Horst et al., 2015), delivered good spectral information from keV energies up to a few MeV. However, as shown in (Horst et al., 2015), this method was not suitable for resolving the content of photon spectra above 10 MeV because of the dominant presence of electrons. Therefore, I created a new method for evaluating the incident electron spectra from the TLD readings. For this purpose, an unfolding algorithm was written in MATLAB, based on a sequential enumeration of matching data series between the dose values measured by the dosimeters and those calculated with FLUKA simulations. The significant advantage of this method is the ability to obtain the spectrum of incident electrons down to energies as low as 1 keV, which is very difficult to measure reliably using traditional electron spectrometers.
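The unfolding idea, sequentially enumerating candidate spectra against a library of simulated dose series, can be sketched as follows. The response model below is a toy stand-in for the FLUKA-computed detector response; depths, units and the single-temperature parametrisation are illustrative assumptions:

```python
import numpy as np

def simulated_doses(temperature_mev):
    """Toy dose series in three TLDs at increasing depth for an exponential
    electron spectrum of the given temperature; in the real analysis these
    curves come from FLUKA simulations of the detector stack."""
    depths = np.array([1.0, 2.0, 4.0])
    return np.exp(-depths / temperature_mev)

def unfold_temperature(measured_doses, candidate_temperatures):
    """Sequentially enumerate candidate temperatures and return the one
    whose simulated dose series best matches the measurement (chi-square)."""
    chi2 = [np.sum((simulated_doses(t) - measured_doses) ** 2)
            for t in candidate_temperatures]
    return candidate_temperatures[int(np.argmin(chi2))]

candidates = np.linspace(0.5, 15.0, 30)   # 0.5 MeV steps
measured = simulated_doses(5.0)           # stand-in for real TLD readings
best_t = unfold_temperature(measured, candidates)
```

The real algorithm compares full measured dose series against many simulated spectra, but the matching principle is the same.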
The evaluation of the effective temperature of super-ponderomotive electrons, retrieved from the measured TLD doses by means of Monte Carlo simulations, demonstrated that low-density polymer foam layers irradiated by the relativistic sub-ps laser pulse provide a strong increase of the effective electron temperature, from 1.5-2 MeV in the case of relativistic laser interaction with a metallic foil up to 13 MeV for laser shots onto the pre-ionized foam, together with a more than 10 times higher charge carried by the relativistic electrons.
A progressive simulation method for whole electron spectra, described by a two-temperature Maxwellian distribution function, has been developed, and the results of the dose simulations were compared with the acquired experimental data. The advanced feature of this method, which distinguishes it from simulations of the photon spectrum using mono-energetic electron beams interacting with the target (Nilgün Demir, 2013; Nilgün Demir, 2019) or an initial electron spectrum expressed as a function of a single electron temperature (Fiorini, 2012), is the ability to simulate an initial electron spectrum described by a Maxwellian distribution function with two temperatures.
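The two-temperature parametrisation amounts to a sum of two exponential components. A minimal sketch; the normalisations below are illustrative, while the temperatures follow the values quoted above (a colder bulk and a ~13 MeV super-ponderomotive tail):

```python
import numpy as np

def two_temperature_spectrum(energy_mev, n1, t1_mev, n2, t2_mev):
    """Electron energy spectrum as the sum of two Maxwellian (exponential)
    components: dN/dE = N1*exp(-E/T1) + N2*exp(-E/T2)."""
    return (n1 * np.exp(-energy_mev / t1_mev)
            + n2 * np.exp(-energy_mev / t2_mev))

E = np.linspace(0.0, 100.0, 201)
spectrum = two_temperature_spectrum(E, n1=1e12, t1_mev=2.0,
                                    n2=1e10, t2_mev=13.0)
```

The cold component dominates at low energies, while above a few tens of MeV the spectrum is carried almost entirely by the hot tail, which is why a single-temperature fit cannot describe both regions at once.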
An important objective of this thesis was the study and characterisation of laser-driven photon beams; in addition, the positron beams were evaluated. The investigation of bremsstrahlung-photon and positron spectra from high-Z targets, by varying the target thickness from 10 µm to 4 mm in simulated interactions of electron spectra with Maxwellian distribution functions, made it possible to define the optimal thickness at which the photon and positron fluences are maximal. Furthermore, based on the results of the FLUKA simulations, gold was found to be the most suitable e−γ target material for future experiments because of its highest bremsstrahlung yield.
Additionally, Monte Carlo simulations were performed using the electron-beam parameters obtained from particle-in-cell (PIC) simulations of the electron acceleration process in laser-plasma interactions for two laser energies of 20 J and 200 J. The corresponding electron spectra were imported into the Monte Carlo code FLUKA to simulate the production of bremsstrahlung photons and positrons in a gold converter. The FLUKA simulations showed that the conversion efficiency into MeV gammas can reach a record 10%, which also enhances the generation of positrons. The obtained results demonstrate the advantages of long-scale plasmas of near-critical density (NCD) for increasing the parameters of the MeV particle and photon beams generated in relativistic laser-plasma interactions. The efficiency of the laser-driven generation of MeV electrons and photons is substantially enhanced by the application of low-density polymer foams.
Artificial intelligence in heavy-ion collisions : bridging the gap between theory and experiments
(2023)
Artificial Intelligence (AI) methods are employed to study heavy-ion collisions at intermediate collision energies, where QCD matter at high baryon density and moderate temperature is produced. The experimental measurements of various conventional observables, such as collective flow and particle-number fluctuations, are usually compared with expensive model calculations to infer the physics governing the evolution of the matter produced in the collisions. Various experimental effects and processing algorithms can greatly affect the sensitivity of these observables. AI methods are used to bridge this gap between theory and experiments of heavy-ion collisions. The problems with conventional methods of analyzing experimental data are illustrated in a comparative study of the Glauber MC model and the UrQMD transport model. It is found that the centrality determination and the estimated fluctuations of the number of participant nucleons suffer from strong model dependencies for Au-Au collisions at 1.23 AGeV. This can bias the results of the experimental analysis if the number of participant nucleons used is not consistent throughout the analysis and in the final model-to-data comparison. The measurable consequences of this model dependence of the number of participant nucleons are also discussed. In this context, PointNet-based AI models are developed to accurately reconstruct the impact parameter or the number of participant nucleons in a collision event from the hits and/or reconstructed tracks of particles in 10 AGeV Au-Au collisions at the CBM experiment. In the last part of the thesis, different AI methods to study the equation of state (EoS) at high baryon densities are discussed. First, a Bayesian inference is performed to constrain the density dependence of the EoS from the available experimental measurements of the elliptic flow and mean transverse kinetic energy of mid-rapidity protons in intermediate-energy collisions.
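The conventional centrality determination criticized above typically slices the measured multiplicity distribution into percentile classes. A minimal sketch with a toy multiplicity sample (the exponential distribution and all numbers are illustrative, not a Glauber-MC or UrQMD result):

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy charged-particle multiplicities for minimum-bias events; in the
# studies above these would come from Glauber-MC or UrQMD simulations.
multiplicity = rng.exponential(scale=100.0, size=100_000).astype(int)

def centrality_class(mult, sample, n_classes=10):
    """Assign a centrality class from percentiles of the multiplicity
    distribution: class 0 contains the highest-multiplicity (most central)
    events, class n_classes-1 the most peripheral ones."""
    edges = np.percentile(sample, np.linspace(100, 0, n_classes + 1))
    cls = int(np.searchsorted(-edges, -mult, side="right")) - 1
    return min(max(cls, 0), n_classes - 1)

# With 10 classes, the "0-30% most central" selection is classes 0-2.
```

The model dependence enters when these multiplicity classes are mapped onto impact parameter or participant number, which is exactly the step the PointNet-based regression replaces.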
The UrQMD model was augmented to include arbitrary potentials (or equivalently the EoSs) in the QMD part to provide a consistent treatment of the EoS throughout the evolution of the system. The experimental data constrain the posterior constructed for the EoS for densities up to four times saturation density. However, beyond three times saturation density, the shape of the posterior depends on the choice of observables used. There is a tension in the measurements at a collision energy of about 4 GeV. This could indicate large uncertainties in the measurements, or alternatively the inability of the underlying model to describe the observables with a given input EoS. Tighter constraints and fully conclusive statements on the EoS require accurate, high statistics data in the whole beam energy range of 2-10 GeV, which will hopefully be provided by the beam energy scan programme of STAR-FXT at RHIC, the upcoming CBM experiment at FAIR, and future experiments at HIAF and NICA. Finally, it is shown that the PointNet-based models can also be used to identify the equation of state in the CBM experiment. Despite the uncertainties due to limited detector acceptance and biases in the reconstruction algorithms, the PointNet-based models are able to learn the features that can accurately identify the underlying physics of the collision. The PointNet-based models are an ideal AI tool to study heavy-ion collisions, not only to identify the geometric event features, such as the impact parameter or the number of participant nucleons, but also to extract abstract physical features, such as the EoS, directly from the detector outputs.
A synchrotron is a particular type of cyclic particle accelerator and the first accelerator concept to enable the construction of large-scale facilities [10], such as the largest particle accelerator in the world, the 27-kilometre-circumference Large Hadron Collider (LHC) at CERN near Geneva, Switzerland; the European Synchrotron Radiation Facility (ESRF) in Grenoble, France, for synchrotron radiation; and the superconducting heavy-ion synchrotron SIS100 under construction for the FAIR facility at GSI, Darmstadt, Germany. Unlike a cyclotron, which can accelerate particles starting at low kinetic energy, a synchrotron needs a pre-acceleration facility to bring the particles to an appropriate initial energy before injection. Pre-acceleration can be realized by a chain of other accelerator structures, such as a linac or, in the case of electrons, a microtron; examples are the proton and ion injectors Linac 4 and Linac 3 for the LHC, the UNILAC as the injector for the SIS18 at GSI, and in the future the SIS18 as the injector for the SIS100. The linac is a commonly used injector for ion synchrotrons and consists of three main parts: an ion source creating the particles, a buncher system or an RFQ, followed by the main drift-tube linac (DTL). In order to meet the energy and beam-current requirements of a synchrotron injector linac, its cost is a remarkable percentage of the total facility costs.
However, operating a normal-conducting linac at cryogenic temperatures can be a promising solution for improving the efficiency and reducing the costs of a linac. Synchrotron injectors operate at a very low duty factor, with beam pulse lengths in the 1 µs to 100 µs range, as most of the time is needed to perform the synchrotron cycle. Superconducting linacs are not convenient, as they cannot operate efficiently at low duty factor and high beam currents.
The cryogenic operation of ion linacs has been discussed and investigated at IAP Frankfurt since around 2012 [1, 37]. The motivation was to develop very compact synchrotron injectors at reduced overall linac costs per MV of acceleration voltage. As the beam currents needed for new facilities are increasing as well, the new technology will also allow an efficient realization of the higher injector-linac energies required in that case. Operating normal-conducting structures at cryogenic temperature exploits the significantly higher conductivity of copper at liquid-nitrogen temperatures and below. On the other hand, the anomalous skin effect reduces the gain in shunt impedance considerably [25, 31, 9]. Some intense studies and experiments were performed recently, which are encouraging with respect to increased field levels at linac operation temperatures between 30 K and 70 K [17, 24, 4, 23, 5, 8]. While these studies are motivated by applications in electron acceleration at GHz frequencies, the aim of this work is to find applications in the 100 to 700 MHz range, typical for proton and ion acceleration. At these frequencies, a higher impact in saving RF power is expected due to the larger skin depth, which for the normal skin effect is proportional to f^(−1/2). On the other hand, it is assumed that the improvement in maximum surface field levels will be similar to what has already been demonstrated for electron-accelerator cavities. This should allow a good compromise to be found between the reduced RF power needed to achieve a given accelerating voltage and a reduced total linac length to save building costs.
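The f^(−1/2) scaling follows directly from the classical skin-depth formula. A small sketch with textbook values (the 216 MHz example frequency and the room-temperature copper resistivity are illustrative; the anomalous skin effect is neglected):

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability in H/m

def skin_depth(resistivity_ohm_m, frequency_hz):
    """Normal (classical) skin depth: delta = sqrt(rho / (pi * f * mu0)),
    equivalent to sqrt(2*rho / (omega * mu0))."""
    return math.sqrt(resistivity_ohm_m / (math.pi * frequency_hz * MU0))

RHO_CU_300K = 1.7e-8  # copper resistivity at room temperature (Ohm*m)

# Example frequency in the proton/ion linac range discussed above:
delta = skin_depth(RHO_CU_300K, 216e6)   # a few micrometres
# Quadrupling the frequency halves the skin depth: delta ~ f^(-1/2).
```

The larger skin depth at 100-700 MHz (compared to GHz electron linacs) means the RF currents flow in a thicker layer, so the resistivity reduction at cryogenic temperature translates into a larger saving in RF power, which is the argument made above.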
A very important point is the temperature stability of the cavity surface during the RF pulse. This becomes more important the lower the chosen operating temperature: the temperature dependence of the electric conductivity of copper becomes rather strong below 80 K, as long as the RRR value of the copper is adequate. It is clear that this technology is suited only for cavities operated at low duty cycle, with RF pulse lengths below one millisecond. At longer pulses the cavity surface is heated within the pulse to temperatures where the conductivity advantage is substantially reduced. These conditions fit very well to synchrotron injectors or to pulsed beam power applications.
H-mode structures of the IH and CH type are well known to have rather small cavity diameters at a given operating frequency. Moreover, they can achieve effective acceleration voltage gains above 10 MV/m even at low beam energies, already at room temperature operation [29]. With the new techniques of 3D printing of stainless steel and copper components, cavity sizes can be reduced even further, making the realization of complex cooling channels much easier.
Another topic is copper components in superconducting cavities, such as power couplers. It is of great importance to know the thermal losses at these surfaces precisely, since they cannot be cooled efficiently in a simple way.
As part of this work, an improved buncher system for radio-frequency accelerators with low and medium ion currents was developed. The methodology developed made it possible to design an effective, simplified buncher system for injection into RF accelerators such as RFQs, cyclotrons, DTLs, etc., achieving small output emittances and substantial beam transmission. To match a mono-energetic, continuous beam from an ion source for injection into a radio-frequency accelerating structure, an energy modulation is required, which over the subsequent drift length leads to longitudinal focusing of the beam. A sawtooth waveform provides the ideal energy modulation because of the linear dependence between the particle energy and the relative phase. This is, however, not technologically feasible, since particle accelerators require voltage levels in the kV to 100 kV range. Instead, a spatially separated combination of sinusoidal excitations at the fundamental frequency and higher harmonics can be used for this purpose.
Therefore, an improved harmonic buncher, the so-called "Double Drift Harmonic Buncher" (DDHB), was developed in this work, which offers numerous advantages; small longitudinal emittance as well as cost considerations speak for this approach. The main elements of a DDHB system are two cavities separated by a drift length L1, with the first resonator operated at the fundamental frequency at -90° synchronous phase with applied voltage V1, and the second resonator at the second harmonic frequency at +90° synchronous phase with applied voltage V2. Finally, a second drift L2 at the end of the array is required for longitudinal beam focusing at the main accelerator entrance. Such a setup thus achieves the targeted goal of high capture efficiency and small longitudinal emittance by tuning the four design parameters V1, L1, V2 and L2.
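The principle behind the two-cavity arrangement can be illustrated with a short sketch (a generic illustration, not a design calculation from this work; the voltage ratio V2/V1 = 1/8 is chosen here purely to cancel the cubic term of the Taylor expansion):

```python
import numpy as np

def modulation(phi, v1=1.0, v2=0.125):
    """Energy gain vs. relative phase for the two cavities of a DDHB:
    fundamental at -90 deg and second harmonic at +90 deg synchronous phase."""
    return v1 * np.sin(phi) - v2 * np.sin(2.0 * phi)

# Taylor expansion: sin(phi) - a*sin(2*phi) = (1 - 2a)*phi - (1 - 8a)*phi**3/6 + ...
# With a = V2/V1 = 1/8 the cubic term vanishes, so the modulation follows the
# ideal sawtooth (a linear ramp of slope 1 - 2a = 3/4) over a wide phase range.
phi = np.linspace(-0.25 * np.pi, 0.25 * np.pi, 101)
ideal = 0.75 * phi
err_two = np.max(np.abs(modulation(phi) - ideal))   # two-harmonic buncher
err_one = np.max(np.abs(np.sin(phi) - phi))         # fundamental only
print(f"max deviation from linear ramp, fundamental only : {err_one:.4f}")
print(f"max deviation from linear ramp, with 2nd harmonic: {err_two:.4f}")
```

The second harmonic flattens the sine near the synchronous phase, which is why the two-cavity scheme captures more of the DC beam into a small longitudinal emittance than a single-frequency buncher.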
Understanding the bunching of a DC beam, including space-charge forces, is one of the essential parts of beam physics. Many commercial codes offer simulation capabilities in this field, but their approaches usually remain hidden from the user, or important details needed to map the concept at hand accurately are missing. Therefore, one main task of this work was to develop a dedicated multi-particle tracking beam dynamics code (BCDC), in which the space-charge effect is calculated during the bunching process starting from a DC beam. The BCDC code contains elementary routines such as drift, accelerating gap and magnetic lens for transverse beam focusing, as well as space-charge calculations taking into account the effects of the next neighbouring bunches (NNB). The space-charge algorithm in BCDC is based on direct Coulomb grid-grid interaction and electric-field calculations obtained by localizing the charge density on a Cartesian grid. To achieve accuracy, the field calculations are extended longitudinally and symmetrically around the central bucket (of size βλ), so that the simulation region is three times as large. The central particle distribution is copied into the neighbouring buckets after each step. The resulting fields on the main grid are then recalculated by superimposing the electric fields of the main grid with those from the neighbouring regions. Without this method, a continuous beam that is defined in the simulation only within one cell of length βλ would, for example, produce a spurious space-charge field component Ez at both edges of the cell. Such an unphysical result could be largely eliminated by applying the NNB technique.
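The effect of the NNB copies can be illustrated with a 1D toy model (a schematic stand-in, not the actual BCDC grid-grid algorithm; the softened Coulomb kernel and all numbers are illustrative assumptions):

```python
import numpy as np

def ez_on_grid(z_grid, z_src, q, eps=0.05):
    """Longitudinal field from point charges via a softened Coulomb kernel,
    a 1D toy stand-in for the grid-grid interaction described above."""
    dz = z_grid[:, None] - z_src[None, :]
    return np.sum(q * dz / (dz**2 + eps**2) ** 1.5, axis=1)

cell = 1.0                                           # bucket length (beta*lambda, normalized)
z_src = np.linspace(0.0, cell, 200, endpoint=False)  # uniform "DC" line charge
q = np.full(z_src.size, 1.0 / z_src.size)
z_grid = np.linspace(0.05, 0.95, 19)

# Without neighbours: the uniform beam truncated to one cell shows a large,
# unphysical Ez at both edges of the cell.
ez_single = ez_on_grid(z_grid, z_src, q)

# NNB technique: copy the central distribution into the left and right buckets
# and superimpose their fields on the main grid.
z_nnb = np.concatenate([z_src - cell, z_src, z_src + cell])
q_nnb = np.concatenate([q, q, q])
ez_nnb = ez_on_grid(z_grid, z_nnb, q_nnb)

print("max |Ez| without neighbours:", np.max(np.abs(ez_single)))
print("max |Ez| with NNB          :", np.max(np.abs(ez_nnb)))
```

The two mirror buckets restore the near-symmetry of the charge seen from any point in the central cell, suppressing the edge field by more than an order of magnitude in this toy setup.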
In addition to the NNB feature, BCDC has another special capability, namely space-charge compensation (SCC). Due to ionization of the residual gas, partial space-charge compensation occurs along the low-energy beam transport, with different percentages at and behind the buncher system. One of the main goals of the DDHB concept is to develop it for high-current beam applications, where partial space-charge compensation allows the design to reach higher current levels in practice. This makes the BCDC program a powerful tool for simulations in future high-current projects. Proof-of-principle designs were developed in this work.
In this thesis, we use lattice QCD to study a part of the QCD phase diagram, specifically the QCD phase transition at mu=0, where the QCD matter changes from hadron gas to quark-gluon plasma (QGP) with increasing temperature.
This phase transition takes place as a crossover, but when theoretically changing the masses of the quarks, the order of the phase transition changes as well.
We focus on the region of heavy quark masses with Nf=2 flavours, where we investigate the critical quark mass at the second order phase transition in the form of a Z2 point between the first-order and the crossover region.
The first-order region is positioned at infinitely heavy quarks. As the quark masses decrease, the associated Z3 centre symmetry breaks explicitly, causing the first-order phase transition to weaken until it turns into the Z2 point and finally into a crossover.
We study this Z2 point using simulations at Nf=2 and lattices of the sizes Nt = {6, 8, 10, 12}, partially building on previous work, in which the simulations for Nt = {6, 8, 10} were started.
The simulations for Nt=12 are not yet finished, but we were able to draw some preliminary conclusions. These simulations are run on GPUs and CPUs, using the codes Cl2QCD and open-QCD-FASTSUM, respectively. Afterwards, the data goes through a first analysis step in the form of the Python program PLASMA, preparing it for the two techniques we use to analyse the nature of the phase transition.
As a first, reliable analysis method, we perform a finite size scaling analysis of the data to find the location of the Z2 point. Since we are using lattice QCD, performing a continuum extrapolation is necessary to reach the continuum result.
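As a generic illustration of the kind of observable such an analysis is built on (not code from the thesis), the Binder cumulant of the order-parameter distribution distinguishes the transition types, taking the universal 3d Ising value of about 1.604 exactly at a Z2 point:

```python
import numpy as np

def binder_b4(samples):
    """Binder cumulant B4 = <dX^4> / <dX^2>^2 of an order parameter.
    B4 -> 3 in a crossover (Gaussian fluctuations), B4 -> 1 deep in the
    first-order region (two coexisting phases), and B4 takes the universal
    Z2 (3d Ising) value ~1.604 at the critical point."""
    x = np.asarray(samples, dtype=float)
    dx = x - x.mean()
    return np.mean(dx**4) / np.mean(dx**2) ** 2

# Toy data: Gaussian fluctuations (crossover-like) vs. a symmetric two-peak
# distribution (first-order-like coexistence of two phases).
rng = np.random.default_rng(1)
crossover = rng.normal(0.0, 1.0, 100_000)
first_order = np.concatenate([rng.normal(-1.0, 0.05, 50_000),
                              rng.normal(+1.0, 0.05, 50_000)])
print(f"B4 crossover-like  : {binder_b4(crossover):.3f}")
print(f"B4 first-order-like: {binder_b4(first_order):.3f}")
```

In a finite size scaling analysis, B4 is measured on several spatial volumes; the curves for different volumes cross near the universal value, and the crossing point locates the critical quark mass.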
In this regard, the finite size scaling analysis is hampered by the large amount of simulated data required, both in terms of statistics and the total number of simulations, which is why this thesis is only an intermediate step towards the continuum limit.
This also leads to the second analysis technique we explore in this thesis.
We start to design a Landau theory which describes the phase boundary for heavy masses at Nf=2 based on the simulated data.
We develop a Landau functional for every Nt we have simulation data for.
Although the results do not reach the same precision as those from the finite size scaling analysis, we are able to reproduce the position of the Z2 point for every Nt.
Even though we cannot yet take a continuum extrapolation, with further development in future work this approach might, in the long run, lead to a continuum result that does not require as many simulations as the finite size scaling analysis.
Precise tune determination and split beam emittance reconstruction at the CERN PS synchrotron
(2023)
In accelerator physics, improving the performance and better controlling the operating point of an accelerator has become, year after year, increasingly important in order to achieve higher energies and brightness, as well as nearly point-like particle beams. While this requires increasingly advanced technological developments (for example, materials for more intense superconducting magnets), it cannot take place without targeted studies of linear and non-linear beam dynamics. In this Ph.D. thesis in physics, the linear and non-linear dynamics of charged particles in circular accelerators is discussed and treated in detail. The presentation and discussion of the results is divided into two main topics: the need to know the physical properties of a proton beam, and the development of innovative methods to determine and study the accelerator's working point. With regard to the first topic, an innovative procedure is presented to determine the transverse size of the PS beam during the beam extraction phase. Among the different ways extraction is performed at the PS, the one analysed here is based on the transverse splitting of the beam by means of non-linear fields. The knowledge of the transverse beam size is therefore not trivial, since resonant linear and non-linear beam structures (namely, core and islands) arise, and the beam size has to be quantified for each of them. This parameter is crucial for two main reasons: the accelerator receiving the beam from the upstream machine may have restrictions (physical or magnetic) that cause a partial or total loss of the incoming beam, and any experiments located downstream may need a beam with a transverse size that is as constant as possible; consequently, its monitoring and control are essential.
The second topic concerns the accurate determination of the working point of an accelerator, defined as the number of transverse oscillations the beam particles perform per revolution of the accelerator, both horizontally and vertically; these quantities are called the horizontal and vertical tune, respectively. Their knowledge is crucial to understand whether the beam will be stable or unstable. In fact, not all tune values are acceptable, as there are particular values that bring the beam into resonance. In this configuration, the amplitude of the transverse oscillations of the particles grows in an uncontrolled manner and leads to the loss of all or part of the beam. Note that, in particular operating conditions, resonant conditions are deliberately sought in order to shape the transverse distribution of the beam in a suitable way, as in the above-mentioned PS extraction scheme. This makes it even clearer how essential the determination of the machine working point is for establishing the operating conditions of an accelerator. In this context, several methods (also taken from the field of applied mathematics) to calculate the tune are demonstrated and tested numerically on different types of synthetic signals. Finally, experimental data are used to benchmark a new method for the direct calculation of some characteristic quantities of non-linear beam dynamics (namely, the amplitude detuning, i.e. the variation of the tune as a function of the intensity of the perturbation applied to the beam).
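A minimal sketch of the simplest member of this family of methods, a peak-interpolated FFT of turn-by-turn position data, applied here to a synthetic betatron signal (an illustration only; the refined methods studied in the thesis reach far better precision):

```python
import numpy as np

def measure_tune(x):
    """Estimate the fractional tune from turn-by-turn position data via the
    FFT peak, refined by parabolic interpolation of the spectrum."""
    x = np.asarray(x, float) - np.mean(x)
    spec = np.abs(np.fft.rfft(x * np.hanning(x.size)))
    k = int(np.argmax(spec[1:])) + 1          # skip the DC bin
    a, b, c = spec[k - 1], spec[k], spec[k + 1]
    delta = 0.5 * (a - c) / (a - 2 * b + c)   # parabolic peak refinement
    return (k + delta) / x.size

# Synthetic betatron signal observed over 1024 machine turns
turns = np.arange(1024)
q_true = 0.2468
signal = np.cos(2 * np.pi * q_true * turns + 0.3)
print(f"measured tune: {measure_tune(signal):.5f}")
```

The plain FFT resolves the tune only to 1/N of a turn; the interpolation step recovers sub-bin precision, and interpolation-based refinements of exactly this kind are the starting point for the more advanced tune estimators.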
Precise intensity monitoring at CRYRING@ESR: on designing a Cryogenic Current Comparator for FAIR
(2023)
In the field of today's beam intensity diagnostics there is a significant gap in the non-interceptive, calibrated measurement of the absolute intensity of continuous (unbunched) DC beams with current amplitudes below 1 μA. At the Facility for Antiproton and Ion Research (FAIR), low-intensity DC beams will occur during slow extraction from the synchrotrons as well as for coasting beams of highly-charged or exotic nuclei in the storage rings. The lack of adequate beam instrumentation limits the experimental program as well as the accuracy of experimental results.
The Cryogenic Current Comparator (CCC) can close this diagnostic gap with a high-precision DC current reading independent of ion species and beam parameters. However, the established detector design, based on a core with high magnetic permeability and a radial shield geometry, has well-known weaknesses concerning magnetic shielding efficiency and intrinsic current noise. To eliminate these weaknesses, a novel coreless CCC with a co-axial shield was constructed and combined with a high-performance SQUID contributed by the Leibniz-Institute of Photonic Technology (Leibniz-IPHT Jena). The new axial CCC model was compared to a radial CCC with the established design provided by the Friedrich-Schiller-University Jena. According to numerical simulations prepared at TU Darmstadt and test measurements of the detectors in the laboratory, the new design offered a significant improvement of the shielding factor (from 75 dB to 207 dB at the required dimensions) and eliminated all noise contributions from the core material, promising an improved current resolution. Although the lower inductance of the pickup coil reduced the coupling to the beam significantly, the noise properties of the new CCC type were comparable to the classical version with a high-permeability core. However, the expected decrease of the low-frequency noise, and thus an increase of the current resolution, could not be observed at this stage of development.
Consequently, the classical CCC based on the radial shielding and high-permeability core had to be installed in CRYRING@ESR to provide the best possible intensity measurements for the upcoming experimental campaign. In CRYRING the CCC was operated with beam currents between 1 nA and 20 μA and with different ion species (H, Ne, O, Pb, U). It was shown that the CCC provides a noise-limited current resolution of better than 3.2 nArms at a bandwidth of 200 kHz as well as a noise level below 40 pA/√Hz above 1 kHz. During the operation, the main noise sources of the accelerator environment had to be identified, and suitable mitigation strategies were developed. Temperature and pressure fluctuations were suppressed with a newly-designed cryogenic support system based on a 70 l helium bath cryostat, developed and built in collaboration with the Institut für Luft- und Kältetechnik Dresden, in combination with a helium re-liquefier. The cryogenic operating time was restricted to around 7 days, which must be extended significantly in the future. Digital filters were developed to remove the perturbations of the helium liquefier and of the neighboring dipole magnets. Given the promising results, the CCC system can be considered a prototype for future CCCs at FAIR.
This thesis deals with several aspects of non-perturbative calculations in low-dimensional quantum field theories. It is split into two main parts:
The first part focuses on method development and testing. Using exactly integrable QFTs in zero spacetime dimensions as toy models, the need for non-perturbative methods in QFT is demonstrated. In particular, we focus on the functional renormalization group (FRG) as a non-perturbative exact method and present a novel fluid-dynamic reformulation of certain FRG flow equations. This framework and the application of numerical schemes from the field of computational fluid dynamics (CFD) to the FRG are tested and benchmarked against exact results for correlation functions. We also draw several conclusions for the qualitative understanding and interpretation of renormalization group (RG) flows from this fluid-dynamic reformulation and discuss the generalization of our findings to realistic higher-dimensional QFTs.
The topics discussed in the second part are likewise manifold. In general, the second part of this thesis deals with the Gross-Neveu (GN) model, a prototype of a relativistic QFT. Even though it is a model in two spacetime dimensions, it shares many features of realistic models and theories of high-energy particle physics, and it also emerges as a limiting case from systems in solid state physics. It is especially interesting to study the model at non-vanishing temperatures and densities, and thus its thermodynamic properties and phase structure.
First, we use this model to test and apply the findings of the first part of this thesis in a realistic environment. We analyze how the fluid-dynamic aspects of the FRG manifest themselves in the RG flow of a full-fledged QFT and how we profit from this numerical framework in actual calculations. In doing so, however, we also aim to answer a long-standing question: is there still symmetry breaking and condensation at non-zero temperatures in the GN model if one relaxes the commonly used approximation of an infinite number of fermion species and works with a finite number of fermions? In short: is matter (in the GN model) in a single spatial dimension at non-zero temperature always gas-like?
In general, we also use the GN model to learn about the correct description of QFTs at non-zero temperatures and densities. This is of utmost relevance for model calculations in low-energy quantum chromodynamics (QCD) and other QFTs in medium, and we draw several conclusions about the requirements for stable calculations at non-zero chemical potential.
Investigation of the kinematics involved in Compton scattering and hard X-ray photoabsorption
(2023)
The present work investigates the kinematics of Compton scattering on gaseous, internally cold helium and molecular nitrogen targets in the high- and low-energy regimes. Additionally, photoionization of molecular nitrogen with high-energy photons is investigated. These experimental regimes were previously inaccessible due to the extremely small cross sections involved. Nowadays, third- and fourth-generation synchrotron machines produce sufficient photon flux, enabling the investigation of the above processes. The cold-target recoil-ion momentum spectroscopy (COLTRIMS) technique used here further increases the detection efficiency for the observed processes, since it enables full-solid-angle detection by exploiting momentum conservation.
Compton scattering is investigated at both high (helium and N2) and low (helium) photon energies. In the high-energy regime, the impulse approximation is mostly valid: it assumes that the Compton-scattering process takes place at a free electron carrying the momentum distribution of the bound state, thus ignoring the binding energy of the system. In the low-energy regime, this approximation breaks down.
Photoionization is investigated at high photon energies, where the linear momentum of the photon can no longer be neglected, as is done in the commonly used dipole approximation.
Magnetic quadrupoles and solenoids are elementary components of an accelerator facility and confine the transverse extent of a particle beam by deflecting particles towards the accelerator axis. The conventional design as an electromagnet consists of an iron yoke wound with coils. In this work, such magnet structures are designed on the basis of permanent magnets and optimized with respect to their quality for beam transport, and field measurements on permanent-magnet quadrupoles (PMQs) are performed. These were manufactured with 3D-printed plastic holders, allowing a large variety of shape variations. Building on this, an in-vacuum setup was developed with which the beam envelope inside a permanent-magnet quadrupole triplet can be diagnosed. This makes use of a system developed at the Institute for Applied Physics for non-invasive beam diagnostics using Raspberry Pi single-board computers and cameras in strong magnetic fields.
The PMQ configuration presented in this work is a further development of the design used at CERN in Linac4, an Alvarez drift-tube accelerator for the acceleration of H−, in which eight cuboid samarium-cobalt (SmCo) permanent magnets are integrated into each drift tube of the accelerator.
Building on this, the geometric design parameters were investigated with regard to their influence on the quality of the magnetic field. In a magnetic quadrupole for beam focusing, this quality is characterized by a linear increase of the magnetic field from the quadrupole axis to the pole faces. In the course of this, the design was extended to the use of industrial standard geometries of cuboid magnets and to an increased magnetic flux density. It was investigated how adding further magnets affects the field and whether a better field quality can be achieved with other magnet shapes.
Combining several PMQs at small separations (<10 mm) leads, depending on the geometry of the PMQ singlets, to a considerable degradation of the field linearity, which entails an increase of the phase-space volume occupied by the particles.
Using PMQ triplets as an example, the relevant design parameters are analysed and possible solutions are presented. The resulting effects are illustrated by beam dynamics simulations. For an application of the presented designs, a magnet shell with a honeycomb structure to hold the individual magnets was developed. It consists of two half-shells, each of which guarantees complete enclosure of all magnets and allows simple assembly around a beam pipe. These were 3D-printed from plastic in the institute workshop. Because of the higher achievable magnetization, neodymium-iron-boron magnets (Nd2Fe14B, Br = 1.36 T) were used for the construction of the developed structures. For a magnetic field measurement to confirm the magnetostatic simulations and to assess the print quality, a motorized xyz stage for moving a Hall probe was set up. The measurements show good centring of the magnetic field, so that PMQs with a plastic holder are a fast and inexpensive way to set up a quadrupole configuration at short notice. The cost of a single PMQ is between €50 and €100, depending on its length.
Based on the PMQ structure, a PMQ triplet was placed in vacuum and equipped with Raspberry Pi cameras in the gaps between the singlets. This allowed the beam envelope inside the triplet to be recorded via the fluorescence induced by a helium beam, and first insights into necessary further developments were gathered. The exact technical setup is described in detail in the final chapter of the thesis.
In its simplest form, a PM solenoid is realized with a single axially magnetized hollow cylinder and approximately produces the field distribution of a cylindrical coil. Due to the radial magnetic field components at the ends of the solenoid, particles acquire a tangential velocity component and perform a gyration motion along the solenoid axis. This reduces the beam radius, and the particles retain a velocity component pointing towards the solenoid axis. To maximize this focusing, the magnetic field must be concentrated on the cylinder axis. In particular, when the hollow cylinder is lengthened, the coupling of the pole faces across the inner volume is weakened. For this reason, a design consisting of three hollow-cylinder segments was developed. It is composed of two radially and one axially magnetized hollow cylinder and increases the average magnetic flux density for selected geometries by a factor of two compared to a single hollow cylinder of the same geometry. This corresponds to a quadrupling of the focusing strength, which scales quadratically with the average magnetic flux density. The beam-dynamics consequences are explained using simulations with generated magnetic field distributions. For a cost-effective construction, a design based on cuboid magnets was developed.
The conductivity behaviour of pure, air-saturated water under continuous and pulsed X-ray irradiation (60 kV) was investigated. Two superimposed effects were found: (1) an irreversible increase in conductivity proportional to the X-ray dose rate, presumably attributable to a radiation reaction of the dissolved CO2; (2) a reversible increase in conductivity during irradiation, which can be explained by the formation of an ion species with a mean lifetime of about 0.15 s. It is assumed that these are O2⊖ radical ions, formed by the reaction of the H radicals produced as radiation products with the dissolved oxygen. A possible chemical reaction mechanism is given, which leads to satisfactory quantitative agreement of the experimental results with yield values and reaction constants from the literature.
Neutron stars are unique laboratories for the investigation of the high density properties of bulk matter. In this work, the astrophysical constraints for a phase transition from hadronic matter to deconfined quark matter are examined thoroughly. A scheme for relating known astrophysical observables such as mass, radius and tidal deformability to the parameter space of such a transition is devised and applied to the set of data currently available.
In order to span a wide parameter space, a highly parameterizable relativistic mean-field equation of state in compliance with chiral effective field theory results is used, where the stiffness of the equation of state can be varied via the effective mass at saturation density. The phase transitions are modelled using a Maxwell construction and assumed to be of first order, with a constant-speed-of-sound quark matter model. The resulting equations of state are analyzed and divided into four categories, which can be used to constrain the parameter space that allows a phase transition. It is highlighted that a subset of this parameter space would even be detectable without the need for higher-precision measurements. A phase transition at high densities is shown to be particularly promising in this regard. Finally, the groundwork is laid for applying the equation of state used in this work to supernova or merger simulations, by extending it to non-zero temperatures.
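The combination of a Maxwell construction with a constant-speed-of-sound quark phase can be written schematically as follows (a generic form of this widely used construction; the symbols and the energy-density gap Δε are generic notation, not necessarily that of the thesis):

```latex
% Hadronic EoS up to the transition, a density jump at constant pressure
% (Maxwell construction), then a linear pressure-energy-density relation
% with constant squared speed of sound c_{QM}^2:
p(\varepsilon) =
\begin{cases}
  p_{\mathrm{HM}}(\varepsilon), & \varepsilon \le \varepsilon_{\mathrm{trans}},\\[4pt]
  p_{\mathrm{trans}}, & \varepsilon_{\mathrm{trans}} < \varepsilon
      < \varepsilon_{\mathrm{trans}} + \Delta\varepsilon,\\[4pt]
  p_{\mathrm{trans}} + c_{\mathrm{QM}}^{2}
  \bigl[\varepsilon - (\varepsilon_{\mathrm{trans}} + \Delta\varepsilon)\bigr],
  & \varepsilon \ge \varepsilon_{\mathrm{trans}} + \Delta\varepsilon.
\end{cases}
```

The transition is then fully characterized by the transition pressure, the size of the energy-density jump, and the quark-phase speed of sound, which is what makes the parameter space amenable to the systematic scan described above.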
In order to understand the origin of the elements in the universe, one must understand the nuclear reactions by which atomic nuclei are transformed. There are many different astrophysical environments that fulfill the conditions of different nucleosynthesis processes. Even though great progress has been made in recent decades in understanding the origin of the elements in the universe, some questions remain unanswered. In order to understand the processes, it is necessary to measure cross sections of the involved reactions and constrain theoretical model predictions. A variety of methods have been developed to measure nuclear reaction cross sections relevant for nuclear astrophysics. In this thesis, two different experiments and their results, both using the well-established activation method, are presented.
A measurement of the proton capture cross section on the p-nuclide 96Ru was performed at the Institute of Structure and Nuclear Astrophysics (ISNAP), Notre Dame, USA. The main goal of this experiment was to compare the results with those obtained by Mei et al. in a pioneering experiment using the method of inverse kinematics at the GSI Helmholtzzentrum für Schwerionenforschung GmbH, Darmstadt, Germany. Therefore, the activations were carried out at the same center-of-mass energies of 9 MeV, 10 MeV and 11 MeV. Another activation was carried out at an energy of 3.2 MeV to compare the result to a measurement of Bork et al., who also used the activation method. While the results at 3.2 MeV agree quite well with those of Bork et al., the results at higher energies show significantly smaller cross sections than those measured by Mei et al. Experimental details, the data analysis and sources of uncertainty are discussed.
The second part of this thesis describes a neutron capture cross section experiment. At the Institut für Kernphysik of Goethe-Universität Frankfurt, an experimental setup allows the production of quasi-Maxwellian neutron fields to measure Maxwellian-averaged cross sections (MACS) relevant for s-process nucleosynthesis. The setup was upgraded with a fast electric linear guide to transport samples from the activation to the detection site. The cyclic activation of the sample increases the signal-to-noise ratio and makes it possible to measure neutron captures that lead to nuclei with half-lives on the order of seconds. In a first campaign, the MACS of the reactions 51V(n,γ), 107,109Ag(n,γ) and 103Rh(n,γ) were measured. The new components of the setup as well as the data analysis framework are described, and the results of the measurements are discussed.
We study the polarization of relativistic fluids using the relativistic density operator at global and local equilibrium. In global equilibrium, a new technique to compute exact expectation values is introduced, which is used to obtain the exact polarization vector for fields of any spin. The same result has been extended to the case of massless fields. Furthermore, it is demonstrated that at local equilibrium not only the thermal vorticity but also the thermal shear contribute to the polarization vector. It is shown that assuming an isothermal local equilibrium, the new term can solve the polarization sign puzzle in heavy ion collisions.