Most elements heavier than iron are synthesized in stars via neutron capture reactions in the r- and s-process. The s-process nucleosynthesis is composed of a main and a weak component. While the s-process is considered to be well understood, further investigations using nucleosynthesis simulations rely on measured neutron capture cross sections as crucial input parameters. Neutron capture cross sections relevant for the s-process can be measured using various experimental methods. A prominent example is the activation method based on the 7Li(p,n)7Be reaction as a neutron source, which has the advantage of high neutron intensities and can produce a quasi-stellar neutron spectrum at kBT = 25 keV. Other neutron sources that can provide quasi-stellar spectra at different energies suffer from lower neutron intensities. Simulations using the PINO tool suggest activating samples with different neutron spectra provided by the 7Li(p,n)7Be reaction and then forming a linear combination of the obtained spectrum-averaged cross sections to determine the Maxwellian-averaged cross section (MACS) at various energies of astrophysical relevance. To investigate the accuracy of the PINO tool at proton energies between the neutron emission threshold at Ep = 1880.4 keV and 2800 keV,
measurements of the 7Li(p,n)7Be neutron fields are presented, which were carried out at the PTB Ion Accelerator Facility at the Physikalisch-Technische Bundesanstalt in Braunschweig. The neutron fields of ten different proton energies were measured.
The presented neutron fields show good agreement at the proton energies Ep = 1887, 1897, 1907, 1912, and 2100 keV. For the other proton energies, Ep = 2000, 2200, 2300, 2500, and 2800 keV, differences between measurement and simulation were found and are discussed. The obtained results can be used to benchmark and adapt the PINO tool and provide crucial information for the further improvement of the neutron activation method for astrophysics.
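The Maxwellian average at the core of this procedure can be sketched numerically. The snippet below is a generic illustration with purely hypothetical numbers (a 1/v cross section with a 100 mb·keV^1/2 prefactor, not data from this work); it evaluates MACS = (2/√π)(kT)⁻² ∫ σ(E) E e^(−E/kT) dE by direct summation, and for a 1/v law the result reproduces σ(kT), a standard consistency check.

```python
import math

def maxwellian_averaged_cs(sigma, kT, emax_factor=30.0, n=40000):
    """MACS = (2/sqrt(pi)) * (kT)^-2 * integral of sigma(E)*E*exp(-E/kT) dE,
    evaluated by a simple Riemann sum from 0 to emax_factor*kT."""
    h = emax_factor * kT / n
    total = 0.0
    for i in range(1, n + 1):
        E = i * h
        total += sigma(E) * E * math.exp(-E / kT) * h
    return (2.0 / math.sqrt(math.pi)) * total / kT**2

# Hypothetical 1/v cross section (values in mb, energies in keV):
def sigma_1v(E):
    return 100.0 / math.sqrt(E)

macs_25 = maxwellian_averaged_cs(sigma_1v, kT=25.0)
# For a 1/v law the MACS equals sigma(kT), here 100/sqrt(25) = 20 mb.
```

For more structured cross sections (resonances, thresholds) the integral no longer collapses to σ(kT), which is precisely why measured spectrum-averaged cross sections at several spectra are needed as input.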
As an application of the 7Li(p,n)7Be neutron fields, an activation experiment campaign on gallium is presented, an element that is mostly produced in the weak s-process in massive stars. The available cross section data for the 69,71Ga(n,γ) reactions, mostly determined by activation measurements, show differences of up to a factor of three. To improve the data situation, activation measurements were carried out using the 7Li(p,n)7Be reaction. The neutron capture cross sections for a quasi-stellar neutron spectrum at kBT = 25 keV were determined for 69Ga and 71Ga.
In this work, the flexibility requirements of a highly renewable European electricity network, which has to cover fluctuations of wind and solar power generation on different temporal and spatial scales, are studied. Cost-optimal ways to provide this flexibility are analysed, including the optimal distribution of infrastructure, large-scale transmission, storage, and dispatchable generators. To examine these issues, a model of increasing sophistication is built: first considering different flexibility classes of conventional generation, then adding storage, before finally considering transmission, in order to see the effects of each.
To conclude, this work showed that slowly flexible base-load generators can only be used in energy systems with renewable shares of less than 50%, independent of the expansion of an interconnecting transmission network within Europe. Furthermore, for a system with a dominant fraction of renewable generation, highly flexible generators are essentially the only necessary class of backup generators. The total backup capacity can only be decreased significantly if interconnecting transmission is allowed, clearly favouring a Europe-wide energy network. These results are independent of the complexity level of the cost assumptions used for the models. The use of storage technologies makes it possible to reduce the required conventional backup capacity further. This highlights the importance of including additional technologies in the energy system that provide flexibility to balance the fluctuations caused by renewable energy sources. Such technologies could be, for example, advanced energy storage systems, interconnecting transmission in the electricity network, and hydro power plants.
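The capacity-sharing effect of interconnecting transmission can be illustrated with a toy calculation (hypothetical residual-load numbers, not output of the models in this work): pooling the residual load of two regions never requires more backup capacity than covering each region in isolation, because shortfalls in one region can be offset by surpluses in the other.

```python
def backup_capacity(residual_load):
    """Dispatchable backup capacity needed to cover every shortfall,
    where positive residual load = load minus renewable generation."""
    return max(max(r, 0.0) for r in residual_load)

# Hypothetical residual-load time series for two regions (arbitrary units):
region_a = [5.0, -2.0, 3.0]   # shortfall, surplus, shortfall
region_b = [-1.0, 4.0, 0.0]   # surplus, shortfall, balanced

# Without interconnection, each region must cover its own worst shortfall:
isolated = backup_capacity(region_a) + backup_capacity(region_b)  # 5 + 4 = 9

# With unconstrained transmission, the regions share one aggregated balance:
pooled = backup_capacity([a + b for a, b in zip(region_a, region_b)])  # max(4, 2, 3) = 4
```

The inequality pooled ≤ isolated holds for any time series, since the maximum of a sum never exceeds the sum of the maxima; the size of the gap depends on how anticorrelated the regional fluctuations are.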
It was demonstrated that a cost-optimal European electricity system with almost 100% renewable generation can have total system costs comparable to today's system cost. However, this requires a very large transmission grid expansion, to nine times the line volume of the present-day system. Limiting transmission increases the system cost by up to a third; however, a compromise grid with four times today's line volume already locks in most of the cost benefits. It is therefore clear that increasing the pan-European network connectivity enables a cost-efficient integration of renewable energies, which is strongly needed to reach current climate change mitigation goals.
It was also shown that a similarly cost efficient, highly renewable European electricity system can be achieved that considers a wide range of additional policy constraints and plausible changes of economic parameters.
Spin waves in yttrium-iron garnet have been a subject of research for decades. Recently, the report of Bose-Einstein condensation at room temperature has brought these experiments back into focus. Because quasiparticles have a much smaller mass than, for example, atoms, the condensation temperature can be much higher. With spin-wave quasiparticles, so-called magnons, even room temperature can be reached by externally injecting magnons. Possible applications in information technologies are also of interest: using excitations as carriers of information instead of charges offers a much more efficient way of processing data, and basic logical operations have already been realized. Finally, the wavelength of spin waves, which can be decreased to the nanoscale, opens the opportunity to further miniaturize devices for receiving signals, for example in smartphones.
For all of these purposes the magnon system is driven far out of equilibrium. In order to gain a better fundamental understanding, we concentrate in the main part of this thesis on the nonequilibrium aspect of magnon experiments and investigate their thermalization process. In this context we develop formalisms which are of general interest and which can be adapted to many different kinds of systems.
A milestone in describing gases out of equilibrium was the Boltzmann equation, introduced by Ludwig Boltzmann in 1872. In this thesis, extensions to the Boltzmann equation with improved approximations are derived. For the application to yttrium-iron garnet, we describe the thermalization process after magnons have been excited by an external microwave field.
First we consider the Bose-Einstein condensation phenomenon. A special property of thin films of yttrium-iron garnet is that the magnon dispersion has its minimum at finite wave vectors, which leads to an interesting behavior of the condensate. We investigate the spatial structure of the condensate using the Gross-Pitaevskii equation and find that the magnons cannot condense solely at the energy minimum; higher Fourier modes also have to be macroscopically occupied. In principle, this can lead to a localization on a lattice in real space.
Next we use functional renormalization group methods to go beyond the perturbation-theory expressions in the Boltzmann equation. It is a difficult task to find a suitable cutoff scheme that fits the constraints of nonequilibrium, namely causality and the fluctuation-dissipation theorem when approaching equilibrium. The cutoff scheme we developed for bosons in the context of our considerations is therefore of general interest for the functional renormalization group. In certain approximations we obtain a system of differential equations with a transition-rate structure similar to that of the Boltzmann equation. We consider a model of two kinds of free bosons, of which one type acts as a thermal bath for the other. Taking a suitable initial state, we can use our formalism to describe the dynamics of magnons such that an enhanced occupation of the ground state is achieved. Numerical results are in good agreement with experimental data.
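The transition-rate structure referred to above can be illustrated with a minimal detailed-balance rate equation for a single bosonic mode coupled to a thermal bath (a generic textbook sketch with hypothetical parameters, not the flow equations derived in this work): stimulated emission into the mode and absorption by the bath balance exactly when the mode carries the Bose-Einstein occupation.

```python
import math

def bose_einstein(eps, T):
    """Equilibrium Bose-Einstein occupation at energy eps and temperature T."""
    return 1.0 / (math.exp(eps / T) - 1.0)

def relax(n0, eps, T, gamma=1.0, dt=0.01, steps=2000):
    """Euler integration of dn/dt = gamma * [ (n+1)*nB - n*(1+nB) ]:
    gain by stimulated emission minus loss by absorption into the bath.
    The bracket equals gamma*(nB - n), so detailed balance drives the
    occupation n towards the Bose-Einstein value nB."""
    nB = bose_einstein(eps, T)
    n = n0
    for _ in range(steps):
        n += gamma * ((n + 1.0) * nB - n * (1.0 + nB)) * dt
    return n

# A strongly overpopulated mode (n0 = 50) decays towards nB = 1/(e - 1):
n_final = relax(n0=50.0, eps=1.0, T=1.0)
```

A full Boltzmann-type kinetic equation couples many such modes through momentum-conserving collision terms with the same (n+1)-enhanced gain and n-proportional loss structure.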
Finally, we extend our model to also include the pumping process and the decrease of the magnon number until thermal equilibrium is reached again. Additional terms that explicitly break the U(1) symmetry make it necessary to also extend the theory from which a kinetic equation can be deduced. Since these extensions are complicated, we restrict ourselves to perturbation theory; because of the weak interactions in yttrium-iron garnet, this already provides good results.
The miniaturization of electronics is reaching its limits. Structures necessary to build integrated circuits from semiconductors are shrinking and could reach the size of only a few atoms within the next few years. At the latest at this point in time, the physics of nanostructures will gain importance in our everyday life. This thesis deals with the physics of quantum impurity models. All models of this class exhibit an identical structure: the simple and small impurity has only few degrees of freedom. It can be built out of a small number of atoms or a single molecule, for example. In the simplest case it can be described by a single spin degree of freedom and, in many quantum impurity models, can be treated exactly. The complexity of the description arises from its coupling to a large number of fermionic or bosonic degrees of freedom (large meaning that we have to deal with particle numbers of the order of 10^{23}). An exact treatment thus remains impossible. At the same time, physical effects which arise in quantum impurity systems often cannot be described within a perturbative theory, since multiple energy scales may play an important role. One example of such an effect is the Kondo effect, where the free magnetic moment of the impurity is screened by a "cloud" of fermionic particles of the quantum bath.
The Kondo effect is only one example of the rich physics stemming from correlation effects in many-body systems. Quantum impurity models, and the oftentimes related Kondo effect, have regained the attention of experimental and theoretical physicists since the advent of quantum dots, which are sometimes also referred to as artificial atoms. Quantum dots offer an unprecedented control and tunability of many system parameters. Hence, they constitute a nice "playground" for fundamental research, while also being promising candidates for building blocks of future technological devices.
Recently, Loss and DiVincenzo's proposal of a quantum computing scheme based on spins in quantum dots increased the efforts of experimentalists to coherently manipulate and read out the spins of quantum dots one by one. In this context, two topics are of paramount importance for future quantum information processing: since decoherence times have to be large enough to allow for good error correction schemes, understanding the loss of phase coherence in quantum impurity systems is a prerequisite for quantum computation in these systems. Nonequilibrium phenomena in quantum impurity systems also have to be understood before one may gain control of manipulating quantum bits.
As a first step towards more complicated nonequilibrium situations, the reaction of a system to a quantum quench, i.e. a sudden change of external fields or other parameters of the system, can be investigated. We give an introduction to a powerful numerical method used in this field of research, the numerical renormalization group, and apply this method and its recent enhancements to various quantum impurity systems.
The main part of this thesis may be structured in the following way:
- Ferromagnetic Kondo Model,
- Spin-Dynamics in the Anisotropic Kondo and the Spin-Boson Model,
- Two Ising-coupled Spins in a Bosonic Bath,
- Decoherence in an Aharonov-Bohm Interferometer.
The brain is arguably the most complex structure on Earth that humans study. It consists of a vast network of nerve cells capable of processing incoming sensory information in order to build a meaningful representation of the environment. It also coordinates the actions of the organism to interact with that environment. The brain has the remarkable ability both to store information and to continuously adapt to changing conditions, and it does so over the entire lifespan. This is essential for humans and animals to develop and to learn. The basis of this lifelong learning process is the plasticity of the brain, which constantly adapts and rewires the vast network of neurons. The changes to the synaptic connections and to the intrinsic excitability of each neuron take place through self-organized mechanisms and optimize the behaviour of the organism as a whole. The phenomenon of neural plasticity has occupied neuroscience and other disciplines for several decades. Intrinsic plasticity describes the continuous adaptation of a neuron's excitability to maintain a balanced, homeostatic operating regime. Synaptic plasticity in particular, which refers to changes in the strength of existing connections, has been studied under many different conditions and has proven ever more complex with each new study. It is induced by a complex interplay of biophysical mechanisms, depends on several factors such as the frequency of action potentials, their timing, and the membrane potential, and additionally exhibits a metaplastic dependence on past events. Ultimately, synaptic plasticity influences the signal processing and computation of individual neurons and of neuronal networks.
The focus of this work is to advance the understanding of the biological mechanisms underlying the observed plasticity phenomena, and of their consequences, through a more unified theory. To this end, I formulate two functional objectives for neural plasticity, derive learning rules from them, and analyse their consequences and predictions.
Chapter 3 investigates the discriminability of population activity in networks as a functional objective for neural plasticity. The hypothesis is that, in recurrent but also in feed-forward networks, the population activity can be optimized as a representation of the input signals if similar inputs are mapped to representations that are as distinct as possible, making them easier to discriminate for subsequent processing. The functional objective is therefore to maximize this discriminability through changes of the connection strengths and of the excitability of the neurons by means of local, self-organized learning rules. From this functional objective, a number of standard learning rules for artificial neural networks can be derived in a unified way.
Chapter 4 applies a similar functional approach to a more complex, biophysical neuron model. The objective is to attain a sparse, strongly asymmetric distribution of synaptic strengths, as has repeatedly been found experimentally, through local synaptic learning rules. From this functional approach, all major phenomena of synaptic plasticity can be explained. Simulations of the learning rule in a realistic neuron model with full morphology reproduce the data of timing-, rate- and voltage-dependent plasticity protocols. The learning rule also has an intrinsic dependence on the position of the synapse, in agreement with the experimental results. Moreover, the learning rule can explain metaplastic phenomena without additional assumptions, and the approach predicts a new form of metaplasticity that affects timing-dependent plasticity. The formulated learning rule leads to two novel unifications for synaptic plasticity: First, it shows that the various phenomena of synaptic plasticity can be understood as consequences of a single functional objective. Second, the approach bridges the gap between the functional and the mechanistic levels of description. The proposed functional objective leads to a learning rule with a biophysical formulation that can be linked to established theories of the biological mechanisms. Furthermore, the objective of a sparse distribution of synaptic strengths can be interpreted as contributing to energy-efficient synaptic transmission and optimized coding.
The ab-initio molecular dynamics framework has been a cornerstone of computational solid state physics in the last few decades. Although it is already a mature field, it is still rapidly developing to accommodate the growth in solid state research as well as to efficiently utilize the increase in computing power. Starting from first principles, ab-initio molecular dynamics provides essential information about the structural and electronic properties of matter under various external conditions. In this thesis we use ab-initio molecular dynamics to study the behavior of BaFe2As2 and CaFe2As2 under the application of external pressure. BaFe2As2 and CaFe2As2 belong to the family of iron based superconductors, which are novel and promising superconducting materials. The application of pressure is one of two key methods by which the electronic and structural properties of iron based superconductors can be modified, the other one being doping (or chemical pressure). In particular, it has been noted that pressure conditions have an important effect, but their exact role is not fully understood. To better understand the effect of different pressure conditions we have performed a series of ab-initio simulations of pressure application. In order to apply the pressure with an arbitrary stress tensor we have developed a method based on the Fast Inertial Relaxation Engine, whereby the unit cell and the atomic positions are evolved according to the metadynamical equations of motion. We have found that the application of hydrostatic and c-axis uniaxial pressure induces a phase transition from the magnetically ordered orthorhombic phase to the non-magnetic collapsed tetragonal phase in both BaFe2As2 and CaFe2As2. In the case of BaFe2As2, an intermediate non-magnetic tetragonal phase is observed in addition.
Application of uniaxial pressure parallel to the c axis reduces the critical pressure of the phase transition by an order of magnitude, in agreement with the experimental findings. In-plane pressure application did not result in a transition to the non-magnetic tetragonal phase; instead, a rotation of the magnetic order direction could be observed. This is discussed in the context of Ginzburg-Landau theory. We have also found that the magnetostructural phase transition is accompanied by a change in the Fermi surface topology, whereby the hole cylinders centered around the Gamma point disappear, restricting the possible Cooper pair scattering channels in the tetragonal phase. Our calculations also permit us to estimate the bulk moduli and the orthorhombic elastic constants of BaFe2As2 and CaFe2As2.
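The Fast Inertial Relaxation Engine underlying the pressure-application method is itself a simple, general relaxation algorithm. The following sketch shows its core logic (velocity mixing toward the force direction plus an adaptive time step, with freezing on uphill motion) for a single particle in a hypothetical quadratic potential; the parameters are generic defaults, and the unit-cell degrees of freedom evolved in this work are omitted.

```python
import math

def fire_minimize(force, x, dt=0.1, dt_max=0.3, n_min=5,
                  f_inc=1.1, f_dec=0.5, alpha0=0.1, f_alpha=0.99,
                  tol=1e-8, max_steps=10000):
    """Minimal FIRE relaxation: semi-implicit Euler dynamics plus the
    velocity mixing v -> (1-a)*v + a*|v|*F_hat and an adaptive time step."""
    v = [0.0] * len(x)
    alpha, n_downhill = alpha0, 0
    for _ in range(max_steps):
        F = force(x)
        fnorm = math.sqrt(sum(f * f for f in F))
        if fnorm < tol:                      # converged: forces vanish
            break
        if sum(f * vi for f, vi in zip(F, v)) > 0.0:   # moving downhill
            vnorm = math.sqrt(sum(vi * vi for vi in v))
            v = [(1 - alpha) * vi + alpha * vnorm * f / fnorm
                 for vi, f in zip(v, F)]
            n_downhill += 1
            if n_downhill > n_min:           # accelerate after a streak
                dt = min(dt * f_inc, dt_max)
                alpha *= f_alpha
        else:                                # uphill: freeze and restart
            v = [0.0] * len(x)
            dt *= f_dec
            alpha, n_downhill = alpha0, 0
        v = [vi + dt * f for vi, f in zip(v, F)]
        x = [xi + dt * vi for xi, vi in zip(x, v)]
    return x

# Hypothetical quadratic "energy landscape" with its minimum at (1, -0.5):
def grad_force(x):
    return [-2.0 * (x[0] - 1.0), -4.0 * (x[1] + 0.5)]

relaxed = fire_minimize(grad_force, [3.0, 2.0])
```

In the structural-relaxation setting, x collects atomic (and here also cell) coordinates and the force includes the stress-tensor target, but the adaptive dynamics is the same.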
To study the electronic structure in systems with broken translational symmetry, such as doped iron based superconductors, it is necessary to develop a method to unfold the complicated band structures arising from supercell calculations. In this thesis we present an unfolding method based on group theoretical techniques. We achieve the unfolding by employing induced irreducible representations of space groups. The unique feature of our method is that it treats the point group operations on an equal footing with the translations. This permits us to unfold band structures beyond the limit of translational symmetry and, if certain conditions are met, also to formulate tight-binding models of reduced dimensionality. The inclusion of point group operations in the unfolding formalism allows us to reach important conclusions about the two- versus one-iron picture in iron based superconductors.
Finally, we present the results of ab-initio structure prediction for the giant volume collapse in MnS2 and for alkali-doped picene. In the case of MnS2, a previously unobserved high-pressure arsenopyrite structure of MnS2 is predicted and the stability regions for the two competing metastable phases under pressure are determined. In the case of alkali-doped picene, crystal structures with different levels of doping were predicted and used to study the role of electronic correlations.
The PANDA experiment will be one of the flagship experiments at the future Facility for Antiproton and Ion Research (FAIR) in Darmstadt, Germany. It is a versatile detector dedicated to topics in hadron physics such as charmonium spectroscopy and nucleon structure. A DIRC counter will deliver hadronic particle identification in the barrel part of the PANDA target spectrometer and will cleanly separate kaons with momenta up to 3.5 GeV/c from a large pion background. An alternative DIRC design option, using wide Cherenkov radiator plates instead of narrow bars, would significantly reduce the cost of the system. Compact fused silica photon prisms have many advantages over the traditional stand-off boxes filled with liquid. This work describes the study of these design options, which are important advancements of the DIRC technology in terms of cost and performance. Several new reconstruction methods were developed and are presented. Prototypes of the DIRC components were built and tested in particle beams, and the new concepts and approaches were applied. An evaluation of the performance of the designs, feasibility studies with simulations, and a comparison of simulation and prototype tests are presented.
An investigation of photoelectron angular distributions and circular dichroism of chiral molecules
(2021)
The present work demonstrates the capability of several types of molecular frame photoelectron angular distributions (MFPADs), and of the linked chiroptical phenomenon of photoelectron circular dichroism (PECD), to map in great detail the molecular geometry of polyatomic chiral molecules as a function of photoelectron energy. To investigate the influence of the molecular potential on the MFPADs, two chiral molecules were selected, namely 2-(methyl)oxirane (C3H6O, MOx, m = 58.08 u) and 2-(trifluoromethyl)oxirane (C3H3F3O, TFMOx, m = 112.03 u). The two molecules differ in one substitutional group and share an oxirane group, where the O(1s) electron was directly photoionized using synchrotron radiation in the soft X-ray regime. The direct photoionization of the K-shell electron is well localized in the molecule and induces the ejection of two or more electrons; the excited system separates into several charged (and possibly neutral) fragments, which undergo Coulomb explosion due to their charges. The electrons and the fragments were detected using COLd Target Recoil Ion Momentum Spectroscopy (COLTRIMS), and the momentum vectors were calculated for each fragment originating from a single ionization event. This method makes it possible to post-orient molecules in space, giving access to the molecular frame and thus to the MFPAD and its related PECD for multiple light propagation directions.
Stereochemistry (from the Greek στερεο- stereo-, meaning solid) refers to chemistry in three dimensions. Since most molecules have a three-dimensional (3D) structure, stereochemistry pervades all fields of chemistry and biology, and it is an essential point of view for the understanding of chemical structure, molecular dynamics and molecular reactions. The understanding of the chemistry of life is tightly bound to major discoveries in stereochemistry, which triggered tremendous technical advancements, making it a flourishing field of research since its revolutionary introduction in the 19th century. In chemistry, chirality is a branch of stereochemistry which focuses on objects with the peculiar geometrical property of not being superimposable on their mirror images. The word chirality is derived from the Greek χειρ for "hand", and its first use in chemistry is usually attributed to Lord Kelvin, who, during a lecture at the Oxford University Junior Scientific Club in 1893, called "any geometrical figure, or group of points, 'chiral', and say that it has chirality if its image in a plane mirror, ideally realized, cannot be brought to coincide with itself." Although this is usually considered the birth of the word chirality, the underlying concept was already present in several fields of science (above all mathematics), proving the multidisciplinary relevance of chirality across many fields of science and beyond. Nature shows great examples of chiral symmetry on all scales. Empirically, it can be observed at the macroscopic scale (e.g. the distribution of rotations of galaxies) down to the microscopic scale (e.g. the structure of some plankton species), but it is at the molecular level where the numbers become remarkable: most pharmaceutical drugs, food fragrances, pheromones, enzymes, amino acids and DNA molecules are, in fact, chiral.
Moreover, the concept of chirality goes far beyond the mere spatial symmetry of objects, being crucially entangled with the fundamental properties of the physical forces in nature. Symmetry breaking, namely the different physical behaviour of two chiral systems under the same stimuli, is considered to be one of the best explanations for the long-standing question of homochirality in biological life, and ultimately for the chemical origin of life on Earth as we know it. Our organism shows high enantio-selectivity towards specific compounds ranging from drugs to fragrances. Over 800 odour molecules commonly used in the food and fragrance industries have been identified as chiral, and their enantiomeric forms are perceived to have very different smells, as in the well-known example of D- and L-limonene. Similarly, responses to pharmaceutical drugs can be enantiomer specific; in fact, about 60% of the drugs currently on the market are chiral compounds, and nearly 90% of them are sold as racemates. The same degree of enantio-selectivity is observed in the communication systems of plants and insects. Plants produce lipophilic liquids with high vapour pressure called plant volatiles (PVs), which are synthesized via different enzymes called terpene synthases that are usually chiral. Chiral molecules and chiral effects have a strong impact on all fields of science, with exciting developments ranging from stereo-selective synthesis based on heterogeneous enantioselective catalysis, to optoelectronics, photochemical asymmetric synthesis, and chiral surface science, just to cite a few.
Chiral molecules come in two forms called enantiomers. Their almost identical chemical and physical properties continue to pose technical challenges concerning the resolution of racemic mixtures, the determination of the enantiomeric excess, and the direct determination of the absolute configuration of an enantiomer. ...
We discuss aspects of the phase structure of a three-dimensional effective lattice theory of Polyakov loops derived from QCD by strong coupling and hopping parameter expansions. The theory is valid for the thermodynamics of heavy quarks, where it shows all qualitative features of nuclear physics emerging from QCD. In particular, the SU(3) pure gauge effective theory also exhibits a first-order thermal deconfinement transition due to the spontaneous breaking of its global Z₃ center symmetry. The presence of heavy dynamical quarks breaks this symmetry explicitly and, consequently, the transition weakens with decreasing quark mass until it disappears at a critical endpoint. At non-zero baryon density, the effective theory can be evaluated either analytically by the so-called high-temperature expansion, which does not suffer from the sign problem, or numerically by standard Monte-Carlo methods thanks to its mild sign problem. The first part of this work is devoted to a systematic derivation of the effective theory up to 6th order in the hopping parameter κ. This method, combined with the SU(3) link update algorithm, provides a way to simulate the O(κ⁶) effective theory. The second part involves a study of the deconfinement transition of the pure gauge effective theory, with and without static quarks, at all chemical potentials with the help of the high-temperature expansion. Our estimate of the deconfinement transition and its critical endpoint as a function of quark mass and all chemical potentials agrees well with recent Monte-Carlo simulations. In the third part, we investigate the Nf ∈ {1,2} effective theory at zero chemical potential up to O(κ⁴). We determine the location of the critical hopping parameter at which the first-order deconfinement phase transition terminates and changes to a crossover.
Our results for the critical endpoint of the O(κ²) effective theory are in excellent agreement with the determinations from simulations of four-dimensional QCD with a hopping-expanded determinant by the WHOT-QCD collaboration. For the O(κ⁴) effective theory, our estimate suggests that the critical quark mass increases as the order of the κ-contributions increases. We also compare with full lattice QCD with Nf = 2 degenerate standard Wilson fermions and thus obtain a measure for the validity of both the strong coupling and the hopping expansion in this regime.
In this work, a reaction microscope (REMI) based on the COLTRIMS measurement principle (Cold Target Recoil Ion Momentum Spectroscopy) was newly designed and built. The performance of the experimental setup was impressively demonstrated both in various test series and subsequently under real measurement conditions at the synchrotron radiation facility SOLEIL and at its final destination, the SQS instrument (Small Quantum Systems) of the free-electron laser European XFEL (X-ray free-electron laser).
The COLTRIMS technique makes it possible to detect all charged fragments of an interaction between a projectile and a target particle using two position- and time-resolving detectors. In a vacuum chamber, the target substance, prepared as a molecular beam, is brought into overlap with a projectile beam (e.g. of the XFEL) at the centre of the main chamber, so that an interaction can take place there. The resulting fragments are positively charged ions and negatively charged electrons. Electric fields generated by a spectrometer unit, together with magnetic fields generated by Helmholtz coils, guide the charged fragments towards the detectors. The position and time of an individual particle (e.g. an ion) are measured in coincidence with the other particles (e.g. further ions or electrons). With this measurement method, the momentum vectors and charge states of all charged fragments can be measured in coincidence. Since the geometric arrangement of the individual components plays a decisive role for the performance of the experiment, several constraints had to be met in the redesign of the COLTRIMS apparatus for use at a free-electron laser (FEL). Particular attention was paid to the stringent vacuum requirements of the setup, owing to the enormous light intensity of an FEL. The interplay of the many individual components was first verified in several test series. Among other measures, by varying the material and finish of the vacuum components, the previously established specifications could finally be met. The newly designed target preparation system for producing molecular gas jets now allows the use of up to four differentially pumped stages of different dimensions.
In addition, high-precision piezo actuators were installed, which allow apertures to be moved in vacuum and thus enable a variable adjustment of the local target pressure. The adaptation of the spectrometer's electric fields for a given experiment was carried out by means of simulations of the particle trajectories, particle times of flight, and the detector resolution.
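As a simple illustration of such a time-of-flight simulation: for a single uniform extraction field followed by a field-free drift, the ion flight time follows from elementary kinematics. The geometry, field strength, and fragment below are illustrative assumptions, not the actual spectrometer parameters.

```python
import math

E_CHARGE = 1.602176634e-19  # elementary charge, C
AMU = 1.66053906660e-27     # atomic mass unit, kg

def time_of_flight(mass_amu, charge, v0, e_field, s_acc, s_drift):
    """Flight time of an ion with initial longitudinal velocity v0 (m/s),
    accelerated over s_acc (m) by a uniform field e_field (V/m),
    then drifting field-free over s_drift (m)."""
    m = mass_amu * AMU
    a = charge * E_CHARGE * e_field / m           # constant acceleration
    v1 = math.sqrt(v0**2 + 2.0 * a * s_acc)       # velocity after acceleration
    t_acc = (v1 - v0) / a                         # time in the extraction region
    t_drift = s_drift / v1                        # time in the drift region
    return t_acc + t_drift

# Illustrative: a CH3+ fragment (15 u, singly charged) in a 50 V/cm field,
# 5 cm acceleration region, 10 cm drift region
t = time_of_flight(15.0, 1, 0.0, 5000.0, 0.05, 0.10)
```

Since the flight time scales with the square root of the mass-to-charge ratio, measuring it identifies the fragment; heavier fragments arrive later.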
Since the measurements and results discussed in this work concern the interaction of X-ray and synchrotron radiation with matter, the generation of synchrotron radiation both in circular accelerators and in modern free-electron lasers (FEL) is explained and derived. The European XFEL, a free-electron laser operating in the X-ray regime that served, among others, as the radiation source for the experiments presented here, is one of currently only a few facilities of its kind worldwide. Its light intensity in this wavelength range exceeds that of previously used synchrotron radiation facilities by up to eight orders of magnitude.
In the first deployment of the new apparatus at the synchrotron radiation facility SOLEIL, the ultrafast dissociation process of chloromethane (CH3Cl) was investigated. During the decay process following excitation by X-rays, high-energy Auger electrons are emitted, which were detected in coincidence with various molecular fragments. The mechanism of ultrafast dissociation describes Auger electron emission after resonant molecular excitation while the molecule is dissociating. The kinetic energy of the Auger electron depends on the instant of its emission. The measured Auger electrons can thus provide a "snapshot" of the temporal sequence of the dissociation process.
A detailed description of the data analysis is given, consisting of calibration measurements and an interpretation of the measured data. The final step is the presentation of the electron emission angle distributions in the molecular frame of reference. At the beginning of the dissociation, the angular distribution of the Auger electrons is influenced by the surrounding molecular potential and shows distinct structures along the bond axis. As the binding partners move apart and the Auger electron is emitted during this separation, these structures increasingly vanish and a preferred emission direction perpendicular to the molecular axis becomes visible.
The analysis of the measurement data on multiphoton ionization of oxygen molecules at the free-electron laser European XFEL enabled, among other things, the observation of "hollow molecules", i.e. systems with double core vacancies. Such states arise primarily through the sequential absorption of two photons, where the required photon density can only be provided by FEL facilities. Here, the goal was achieved of observing for the first time, with femtosecond precision, the emission angle distributions of the photoelectrons of multiply ionized oxygen molecules (O+/O3+ breakup channel) as a consequence of the underlying mechanisms. For this purpose, a simplified scheme of the various decay steps was established, and it was determined that the decay can be described by a PAPA sequence, i.e. a twofold succession of photoionization and Auger decay, which creates four positive charges in the molecule. The second XFEL photon is absorbed during the dissociation of the Coulomb-repelling fragments, making this a two-step pump-probe process. Finally, double core vacancies in the oxygen molecule were also demonstrated after selection of the O2+/O2+ breakup channel. The two possibilities of a two-site or a one-site double core vacancy could be considered separately and, likewise for the first time, the electron emission behavior of these two states could be compared.
The Compressed Baryonic Matter (CBM) experiment at FAIR and the NA61/SHINE experiment at the CERN SPS aim to study the region of the QCD phase diagram at high net-baryon densities and moderate temperatures using heavy-ion collisions. The FAIR and SPS accelerators cover energy ranges of 2-11 and 13-150 GeV per nucleon, respectively, in the laboratory frame for heavy ions up to Au and Pb. One of the key observables for studying the properties of the matter created in such collisions is the anisotropic transverse flow of particles.
In this work, the performance of the CBM experiment for anisotropic flow measurements is studied with Monte Carlo simulations of gold ions at SIS-100 energies, employing different heavy-ion event generators. In addition, procedures for centrality estimation and charged-hadron identification are described, and the corresponding software frameworks are developed.
The reaction plane angle is measured with the Projectile Spectator Detector (PSD), a hadron calorimeter located at very forward angles. To prevent radiation damage by the high-intensity ion beam, the PSD has a hole in its center to let the beam pass through. Various combinations of CBM detector subsystems are used to investigate possible systematic biases in flow and centrality measurements. The effects of detector azimuthal non-uniformity and of the PSD beam-hole size on the physics performance are studied. The resulting performance of CBM for flow measurements is demonstrated for identified charged-hadron anisotropic flow as a function of rapidity and transverse momentum in different centrality classes.
The measurement techniques developed for CBM were also validated with experimental data recently collected by the NA61/SHINE experiment at the CERN SPS for Pb+Pb collisions at a beam momentum of 30A GeV/c. Compared to the existing data from the NA49 experiment at the CERN SPS, the new data allow for a more precise measurement of the anisotropic flow harmonics. The fixed-target setup of NA61/SHINE also allows extending the flow measurements available from STAR at the RHIC beam energy scan (BES) program to a wide rapidity range, up to the forward region where the projectile nucleon spectators appear. In this thesis, an analysis of the anisotropic flow harmonics in Pb+Pb collisions at a beam momentum of 30A GeV/c, collected by the NA61/SHINE experiment in 2016, is presented. Flow coefficients are measured relative to the spectator plane estimated with the Projectile Spectator Detector (PSD). They are obtained as a function of rapidity and transverse momentum in different classes of collision centrality. The results are compared with the corresponding NA49 data and the measurements from the RHIC BES program.
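As a rough illustration of the observable involved: the flow coefficients v_n are averages of cos n(φ − Ψ) of the particle azimuthal angles φ relative to an estimated plane angle Ψ. The sketch below is a toy Monte Carlo of this definition, not the thesis's analysis code; the Q-vector event-plane estimator and all numbers are illustrative, and no resolution correction is applied.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_event(n_particles, v2_true, psi_rp):
    """Rejection-sample azimuthal angles from dN/dphi ∝ 1 + 2 v2 cos 2(phi - psi_rp)."""
    phi = []
    while len(phi) < n_particles:
        p = rng.uniform(0.0, 2.0 * np.pi)
        pdf = 1.0 + 2.0 * v2_true * np.cos(2.0 * (p - psi_rp))
        if rng.uniform(0.0, 1.0 + 2.0 * v2_true) < pdf:
            phi.append(p)
    return np.array(phi)

def event_plane_angle(phi, n=2):
    # Q-vector method: Psi_n = (1/n) atan2(sum sin(n phi), sum cos(n phi))
    return np.arctan2(np.sin(n * phi).sum(), np.cos(n * phi).sum()) / n

def v_n(phi, psi, n=2):
    # flow coefficient relative to the plane angle psi
    return float(np.mean(np.cos(n * (phi - psi))))

v2_true, psi_rp = 0.08, 0.3            # illustrative elliptic flow and plane
phi = sample_event(20000, v2_true, psi_rp)
v2 = v_n(phi, psi_rp)                  # v2 measured relative to the true plane
```

In a real analysis the plane angle is itself estimated from data (here, `event_plane_angle` would be applied to the spectator signal), and the measured coefficient must be divided by the plane resolution.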
The conductivity behavior of pure, air-saturated water under continuous and pulsed X-ray irradiation (60 kV) was investigated. Two superimposed effects were found: (1) an irreversible increase in conductivity, proportional to the X-ray dose rate, which is presumably due to a radiation-induced reaction of the dissolved CO2; (2) a reversible increase in conductivity during irradiation, which can be explained by the formation of an ionic species with a mean lifetime of about 0.15 s. It is assumed that these are O2⁻ radical anions, formed by the reaction of the H radicals produced as a radiation product with the dissolved oxygen. A possible chemical reaction mechanism is given, which leads to satisfactory quantitative agreement of the experimental results with yield values and reaction constants from the literature.
Present research in high-energy physics as well as in nuclear physics requires ever more powerful and complex particle accelerators to provide high-luminosity, high-intensity, and high-brightness beams to experiments. With the increased technological complexity of accelerators, meeting the demands of experimenters necessitates a blend of accelerator physics and technology. The problem becomes severe when beam quality has to be optimized in accelerator systems with thousands of free parameters, including the strengths of quadrupoles and sextupoles, RF voltages, etc. Machine learning methods and concepts of artificial intelligence are being adopted in various industrial and scientific branches; in high-energy physics they have recently been used mainly for the analysis of experimental data.
In accelerator physics, the machine learning approach has not yet found wide application, and in general these methods are used without a deep understanding of their effectiveness with respect to more traditional schemes or other alternative approaches. The purpose of this PhD research is to investigate machine learning methods applied to accelerator optimization and control, in particular to optics measurements and corrections. Optics correction, maximization of acceptance, and simultaneous control of various accelerator components such as focusing magnets is a typical accelerator scenario. The effectiveness of machine learning methods in a complex system such as the Large Hadron Collider, whose beam dynamics exhibits a nonlinear response to machine settings, is the core of the study. This work presents successful applications of several machine learning techniques, such as clustering, decision trees, linear multivariate models, and neural networks, to beam optics measurements and corrections at the LHC, providing guidelines for incorporating machine learning techniques into accelerator operation and discussing future opportunities and potential work in this field.
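As a toy illustration of the linear multivariate approach mentioned above: if the optics deviations respond approximately linearly to quadrupole gradient errors, a regression model trained on simulated responses can be inverted to predict the errors from measured deviations. The response matrix, dimensions, and noise levels below are invented for illustration and are unrelated to the actual LHC optics model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical linear response: a matrix R maps quadrupole gradient errors dk
# to measured optics deviations (e.g. phase-advance shifts at BPMs).
n_quads, n_bpms = 8, 40
R = rng.normal(size=(n_bpms, n_quads))

# Training data: random error settings and their (noisy) simulated responses
dk_train = rng.normal(scale=1e-3, size=(200, n_quads))
y_train = dk_train @ R.T + rng.normal(scale=1e-5, size=(200, n_bpms))

# Fit a ridge-regularized linear multivariate model y -> dk
lam = 1e-6
W = np.linalg.solve(y_train.T @ y_train + lam * np.eye(n_bpms),
                    y_train.T @ dk_train)

# Predict the quadrupole errors behind a newly "measured" deviation pattern
dk_true = rng.normal(scale=1e-3, size=n_quads)
y_meas = R @ dk_true
dk_pred = y_meas @ W
```

Training on simulated responses and applying the inverse model to measurements is the basic pattern; the thesis's models additionally handle noise, missing monitors, and the nonlinearity of the real machine.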
Proteins are the machines of cells. To ensure the functionality of numerous cellular processes, communication signals must be relayed within proteins. The transmission of a perturbation at one site in a protein to a distant site, where it triggers structural and/or dynamic changes, is called allostery. Initially, allostery was mainly associated with large-scale conformational changes, but later a more dynamic view of allostery in the absence of such large-scale conformational changes developed. The idea emerged of an allosteric pathway consisting of conserved and energetically coupled amino acids that mediate signal transmission between distant sites in the protein. Numerous theoretical studies have linked these allosteric pathways to pathways of efficient anisotropic energy flow. The energy flow along these networks connects allosteric signaling with vibrational energy transfer (VET). The majority of research on dynamic allostery is based on theoretical methods, because only few suitable experimental techniques exist. To better understand this essential biological process of information transfer, the development of new and powerful experimental instruments and techniques is therefore urgently needed. The present dissertation sets itself this goal.
VET in proteins is inherently anisotropic due to the protein geometry. All globular proteins possess channels of efficient energy flow, which are suspected to be important for protein functions such as the rapid dissipation of excess heat, ligand binding, and allosteric signal transmission. VET can be studied with time-resolved infrared (IR) spectroscopy, in which a femtosecond laser pump pulse injects vibrational energy into a molecular system at a specific site and an IR probe pulse, following after a variable time interval, detects the propagation of this vibrational energy. A protein-compatible and universally applicable chromophore that converts the energy of a visible photon into vibrational energy is needed as a heating element in order to map long-range VET pathways in proteins. The azulene (Azu) chromophore is suitable for this because, after photoexcitation of its first electronic state, it converts almost all of the injected energy into vibrational energy within one picosecond by ultrafast internal conversion. Embedded in the non-canonical amino acid (ncAA) β-(1-azulenyl)-L-alanine (AzAla), the Azu residue can be incorporated into proteins. The arrival of the injected vibrational energy at a specific site in the protein can be detected with an IR sensor. The combination of Azu as a VET heating element and azidohomoalanine (Aha) as a VET sensor with transient IR (TRIR) spectroscopy was already successfully tested on small peptides in the dissertation of H. M. Müller-Werkmeister, which preceded the present dissertation in the laboratories of the Bredenbeck group.
The vibrational frequency of chemical bonds is highly sensitive to even small changes in conformation and dynamics in the immediate environment and can be measured with IR spectroscopy, e.g. Fourier-transform IR (FTIR) spectroscopy. IR spectroscopy offers an exceptionally good time resolution, which makes it possible to observe dynamic processes in molecules on a timescale of a few picoseconds, such as the ultrafast transfer of vibrational energy. With two-dimensional (2D) IR spectroscopy, the relaxation of vibrationally excited states and structural fluctuations around the vibrating bond can be investigated. However, the outstanding time resolution comes with limited spectral resolution. In larger molecules with numerous bonds, the vibrational bands overlap and the spatial resolution is lost. To overcome this limitation, IR labels can be used: chemical groups that absorb in a spectrally transparent region of the protein/water spectrum (1800 to 2500 cm-1). As ncAAs they can be incorporated co-translationally into proteins at a desired site and thus provide site-specific information from the protein interior. Due to their small size, a relatively large extinction coefficient (350-400 M-1cm-1), and a high sensitivity to changes in the local environment, organic azides (N3) such as Aha are particularly suitable IR labels. Aha can be incorporated into proteins as a methionine analogue.
...
Artificial intelligence in heavy-ion collisions : bridging the gap between theory and experiments
(2023)
Artificial Intelligence (AI) methods are employed to study heavy-ion collisions at intermediate collision energies, where QCD matter at high baryon density and moderate temperature is produced. The experimental measurements of various conventional observables, such as collective flow and particle-number fluctuations, are usually compared with expensive model calculations to infer the physics governing the evolution of the matter produced in the collisions. Various experimental effects and processing algorithms can greatly affect the sensitivity of these observables. AI methods are used to bridge this gap between theory and experiment in heavy-ion collisions. The problems with conventional methods of analyzing experimental data are illustrated in a comparative study of the Glauber MC model and the UrQMD transport model. It is found that the centrality determination and the estimated fluctuations of the number of participant nucleons suffer from strong model dependencies for Au-Au collisions at 1.23 AGeV. This can bias the results of the experimental analysis if the number of participant nucleons used is not consistent throughout the analysis and in the final model-to-data comparison. The measurable consequences of this model dependence are also discussed. In this context, PointNet-based AI models are developed to accurately reconstruct the impact parameter or the number of participant nucleons in a collision event from the hits and/or reconstructed tracks of particles in 10 AGeV Au-Au collisions at the CBM experiment. In the last part of the thesis, different AI methods to study the equation of state (EoS) at high baryon densities are discussed. First, a Bayesian inference is performed to constrain the density dependence of the EoS from the available experimental measurements of the elliptic flow and mean transverse kinetic energy of mid-rapidity protons in intermediate-energy collisions.
The UrQMD model was augmented to include arbitrary potentials (or equivalently the EoSs) in the QMD part to provide a consistent treatment of the EoS throughout the evolution of the system. The experimental data constrain the posterior constructed for the EoS for densities up to four times saturation density. However, beyond three times saturation density, the shape of the posterior depends on the choice of observables used. There is a tension in the measurements at a collision energy of about 4 GeV. This could indicate large uncertainties in the measurements, or alternatively the inability of the underlying model to describe the observables with a given input EoS. Tighter constraints and fully conclusive statements on the EoS require accurate, high statistics data in the whole beam energy range of 2-10 GeV, which will hopefully be provided by the beam energy scan programme of STAR-FXT at RHIC, the upcoming CBM experiment at FAIR, and future experiments at HIAF and NICA. Finally, it is shown that the PointNet-based models can also be used to identify the equation of state in the CBM experiment. Despite the uncertainties due to limited detector acceptance and biases in the reconstruction algorithms, the PointNet-based models are able to learn the features that can accurately identify the underlying physics of the collision. The PointNet-based models are an ideal AI tool to study heavy-ion collisions, not only to identify the geometric event features, such as the impact parameter or the number of participant nucleons, but also to extract abstract physical features, such as the EoS, directly from the detector outputs.
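For readers unfamiliar with the architecture: the defining feature of PointNet is a shared per-point network followed by a symmetric pooling operation, which makes the output invariant under permutations of the detector hits. The sketch below is a minimal, untrained forward pass illustrating this idea in plain NumPy; the layer sizes, the four hit features, and the scalar regression head are assumptions for illustration, not the models used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(2)

# Random (untrained) weights; in the thesis these would be trained to predict
# e.g. the impact parameter from detector hits.
W1 = rng.normal(size=(4, 16))   # shared per-point MLP: (x, y, z, t) -> 16
W2 = rng.normal(size=(16, 32))  # shared per-point MLP: 16 -> 32
w_head = rng.normal(size=32)    # scalar regression head

def pointnet_forward(points):
    """PointNet-style forward pass on an (n_hits, 4) array of hit features."""
    h = np.maximum(points @ W1, 0.0)   # shared layer 1 with ReLU
    h = np.maximum(h @ W2, 0.0)        # shared layer 2 with ReLU
    g = h.max(axis=0)                  # symmetric (permutation-invariant) pooling
    return float(g @ w_head)           # scalar output, e.g. impact parameter

event = rng.normal(size=(100, 4))      # toy event: 100 hits, 4 features each
b_pred = pointnet_forward(event)

# Shuffling the hits leaves the prediction unchanged (permutation invariance)
shuffled = event[rng.permutation(len(event))]
```

Because the pooling is a max over the set of hits, the network treats the event as an unordered point cloud, which is what makes the approach directly applicable to raw detector output.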
Landau's Fermi liquid theory has been the main tool for investigating interactions between fermions at low energies for more than 50 years. It has been successful in describing, amongst other things, the mass enhancement in ³He and the thermodynamics of a large class of metals. Whilst this in itself is remarkable given the phenomenological nature of the original theory, experiments have found several materials, such as some superconducting and heavy-fermion materials, which cannot be described within the Fermi liquid picture. Because of this, many attempts have been made to understand these ''non-Fermi-liquid'' phases from a theoretical perspective. This will be the broad topic of the first part of this thesis and will be investigated in Chapter 2, where we consider a two-dimensional system of electrons interacting close to a Fermi surface through a damped gapless bosonic field. Such systems are known to give rise to non-Fermi-liquid behaviour. In particular, we will consider the Ising-nematic quantum critical point of a two-dimensional metal. At this quantum critical point the Fermi liquid theory breaks down and the fermionic self-energy acquires the non-Fermi-liquid-like ω^(2/3) frequency dependence at lowest order within the canonical Hertz-Millis approach to quantum criticality of interacting fermions. Previous studies have shown, however, that due to the gapless nature of the electronic single-particle excitations, the exponent 2/3 is modified by an anomalous dimension η_ψ which changes not only the exponent of the frequency dependence but also the exponent of the momentum dependence of the self-energy. These studies also show that the usual 1/N expansion breaks down for this problem. We therefore develop an alternative approach to calculate the anomalous dimensions based on the functional renormalization group, which is introduced in the introductory Chapter 1.
In doing so, we are able to calculate both the anomalous dimension renormalizing the exponent of the frequency dependence and the one renormalizing the exponent of the momentum dependence of the self-energy. Moreover, we will see that an effective interaction between the bosonic fields, mediated by the fermions, is crucial in order to obtain these renormalizations.
In the second part of this thesis, presented in Chapter 3, we return to Fermi liquid theory itself. Indeed, despite its conceptual simplicity of expressing interacting electrons through long-lived quasi-particles which behave in a similar fashion as free particles, albeit with renormalized parameters, it remains an active area of research. In particular, in order to take into account the full effects of interactions between quasi-particles, it is crucial to consider specific microscopic models. One such effect, which is not captured by the phenomenological theory itself, is the appearance of non-analytic terms in the expansions of various thermodynamic quantities, such as the heat capacity and the susceptibility, with respect to an external magnetic field, temperature, or momentum. Such non-analyticities may have a large impact on the phase diagram of, for example, itinerant electrons near a ferromagnetic quantum phase transition. Inspired by this, we consider a system of interacting electrons in a weak external magnetic field within Fermi liquid theory. For this system we calculate various quasi-particle properties such as the quasi-particle residue, the momentum-renormalization factor, and a renormalization factor which relates to the self-energy on the Fermi surface. From these renormalization factors we then extract physical quantities such as the renormalized mass and the renormalized electron Landé g-factor. By calculating the renormalization factors within second-order perturbation theory, numerically and analytically using a phase-space decomposition, we show that all renormalization factors acquire a non-analytic term proportional to the absolute value of the magnetic field. We moreover explicitly calculate the prefactors of these terms and find that they are all universal and determined by low-energy scattering processes, which we classify.
We also consider the non-analytic contributions to the same renormalization factors at finite temperatures and for finite external frequencies, and discuss possible experimental ways of measuring the prefactors. Specifically, we find that the tunnelling density of states and the conductivity acquire a non-analytic dependence on magnetic field (and temperature) coming from the momentum-renormalization factor. For the latter we discuss how this relates to previous works which show the existence of non-analyticities in the conductivity at first order in the interaction.
Atomistic molecular dynamics approach for channeling of charged particles in oriented crystals
(2015)
Channeling is the process of propagation of charged particles along the planes or axes of crystalline materials. Since the 1960s, this effect has been extensively studied both theoretically and experimentally. It has been applied to the manipulation of high-energy beams, to high-precision structure and defect analysis of crystalline media, and to the production of high-energy radiation. To tune the parameters of channeling and channeling radiation, the process has been adapted to artificially nanostructured materials such as bent crystals, nanotubes, and fullerite. In recent years, the concept of the crystalline undulator was formulated and tested, which predicts special properties of the radiation due to the channeling of projectiles in periodically bent crystals.
In this work, the channeling of sub- and multi-GeV electrons and positrons is studied by means of the atomistic molecular dynamics approach. The results of these studies were presented in a series of articles during my doctoral studies in Frankfurt. This approach enables the simulation of complex channeling scenarios in straight, bent, and periodically bent crystals made of pure crystalline materials and of mixed materials such as Si-Ge crystals, as well as in multilayered and nanostructured crystalline systems. The thesis describes the simulation method, presents simulation results for various cases, and compares them with recent experimental data. The results are compared in terms of estimates of the dechanneling length, the fraction of channeled projectiles, the angular distribution of the outgoing projectiles, and the radiation spectrum.
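For orientation, the continuum model behind planar channeling reduces, in a harmonic approximation U(x) ≈ U0 (2x/d)² of the interplanar potential, to a transverse oscillator x'' = −(8U0/(d²pv))·x with oscillation length λ = 2π/ω. The sketch below is a minimal numerical cross-check of this estimate with Si(110)-like illustrative values; the parameters are assumptions, not taken from the thesis's simulations.

```python
import numpy as np

# Illustrative continuum-model parameters (Si (110)-like planar channel)
U0 = 23.0        # potential-well depth, eV
d = 1.92e-10     # interplanar spacing, m
pv = 1.0e9       # p*v of an ultrarelativistic positron, eV (~ 1 GeV)

# Harmonic channel: x'' = -omega^2 x, with omega^2 = 8 U0 / (d^2 * pv)
omega = np.sqrt(8.0 * U0 / (d**2 * pv))   # spatial frequency, 1/m
lam_analytic = 2.0 * np.pi / omega        # channeling oscillation length, m

# Leapfrog (velocity-Verlet) integration of one oscillation as a cross-check
x, xp = 0.2 * d, 0.0                      # initial transverse offset and slope
dz = lam_analytic / 2000.0
for _ in range(2000):                     # one full oscillation period
    xp -= 0.5 * dz * omega**2 * x         # half kick
    x += dz * xp                          # drift
    xp -= 0.5 * dz * omega**2 * x         # half kick
```

For these values the oscillation length comes out at a few micrometers, which is why channeling in GeV-range beams requires crystals of at least that thickness to see even one oscillation.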
The present thesis deals with the development, construction, intermediate measurements, and final tests under cryogenic conditions of a novel superconducting CH structure for beam operation with high beam loading. This structure continues the concept of the successfully tested 19-cell 360 MHz CH prototype, which achieved a worldwide record accelerating voltage in the low-energy segment; however, several aspects were further developed or adapted to the new boundary conditions. For the new resonator, the focus was placed on a compact design, effective tuning, easy preparation, and the use of a power coupler for beam operation. The resonator geometry consists of seven accelerating cells, is operated at 325 MHz, and its velocity profile is designed for a particle input energy of 11.4 MeV/u. Changes include the stem geometry rotated by 90° to provide space for tuner and coupler flanges, and the use of inclined stems at the resonator entrance and exit to shorten the tank length and achieve a flat field profile. Furthermore, two additional rinsing flanges per tank lid were added for chemical preparation and for high-pressure rinsing with ultrapure water. The cavity is tuned by a novel approach in which two movable bellows tuners are inserted into the resonator volume and can be deflected externally via a tuner rod. In later operation, the rod is to be driven either by a stepper motor or by a piezo actuator. For slow/static tuning, the stepper motor can deflect the tuner within a range of +/- 1 mm to counteract larger frequency deviations on the order of 100 kHz after cool-down.
Fast tuning in the low kHz range is handled by a piezo actuator, which can move the bellows by a few µm to compensate for microphonics or Lorentz force detuning. The resonator is surrounded by a helium jacket made of titanium, forming a closed helium circuit.
Several projects that could make use of such a resonator geometry are currently being planned or built. At GSI, the main part of the future cw LINAC is based on superconducting CH structures in order to provide a beam for the synthesis of new superheavy elements. Furthermore, an upgrade of the existing GSI UNILAC could be realized with superconducting CH resonators. In addition, there is the possibility of replacing the present Alvarez section of the UNILAC with a compact superconducting CH section. Likewise, the two injector sections of the MYRRHA project, operated in parallel, are to be realized with superconducting CH structures.
This dissertation presents the beam-dynamics designs of two radio-frequency quadrupole (RFQ) linear accelerators: the RFQ of the proton linac (p-Linac) of the FAIR project at GSI Darmstadt, as well as a first design draft for a compact RFQ that could be used, among other things, to produce radioisotopes for medical purposes. The focus is on the first design.
This thesis discusses important questions of beam dynamics in the proton-lead operation of the Large Hadron Collider (LHC) at CERN in Geneva. In two blocks of several weeks in 2013 and 2016, proton-lead collisions were successfully generated in the LHC and used by its experiments. One reason for doubts about successful operation in the proton-lead configuration was the fact that the beams have to be accelerated with different revolution frequencies. Since both beams share the beam chamber around the interaction points, they interact at long range. Because of the different revolution frequencies, the positions of these long-range encounters shift every revolution. This can lead to resonant excitation and to growth of the transverse beam emittance, as was observed in the Relativistic Heavy-Ion Collider (RHIC). In this thesis, simulations for the LHC, RHIC, and the High-Luminosity Large Hadron Collider (HL-LHC) are performed with a new model. The results for RHIC show relative emittance growth rates of the gold beam in gold-deuteron operation from 0.1 %/s to 1.5 %/s; growth rates of this magnitude were observed experimentally in RHIC. Simulations for the LHC show no significant increase of the lead-beam emittance for different intensities of the counter-rotating beam. The simulation results confirm the measured stability of the beams in the LHC, and the strong emittance growth observed in RHIC is reproduced. No significant emittance increase is predicted for the Future Circular Collider (FCC) or the HL-LHC either.
Using a frequency-map analysis, this work investigates whether the interaction of the lead beam with the much smaller proton beam in the proton-lead operation of the LHC leads to diffusion within the lead beam. Experience at HERA at DESY in Hamburg and at the SppS at CERN has shown that the lifetime of the larger beam can rapidly decrease under certain circumstances. The simulation results show no chaotic dynamics near the beam centre of the lead beam. This result is supported by experimental observation.
A program code has been developed which calculates the beam evolution in the LHC by means of coupled differential equations. This study shows that the growth rates of the lead beam due to intra-beam scattering are overestimated and that particle bunches of the lead beam lose more intensity than assumed in the model. The analysis also shows that bunches colliding in a detector suffer additional losses that increase with decreasing crossing angle at the interaction point.
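To illustrate the kind of coupled-evolution calculation involved, here is a minimal sketch in which the only effect is collisional burn-off of both bunches, with all geometric luminosity factors lumped into a constant k. The parameter values and the constancy of k are assumptions for illustration; the thesis's model additionally tracks intra-beam scattering, emittance growth, and further loss channels.

```python
# Toy burn-off model: dN_p/dt = dN_Pb/dt = -k * sigma * N_p * N_Pb,
# integrated with a simple explicit Euler scheme.
def evolve(n_p, n_pb, sigma, k, dt, steps):
    """Evolve the two bunch intensities over `steps` time steps of length dt (s)."""
    for _ in range(steps):
        burn = k * sigma * n_p * n_pb * dt   # particles lost to collisions in dt
        n_p -= burn                          # both bunches lose the same number
        n_pb -= burn
    return n_p, n_pb

# Illustrative numbers: proton and lead bunch intensities, a ~2.3 b cross
# section in cm^2, and a lumped luminosity constant k chosen so that the
# lead bunch loses an appreciable fraction over a ~10 h fill.
n_p0, n_pb0 = 2.0e10, 2.0e8
n_p, n_pb = evolve(n_p0, n_pb0, sigma=2.3e-24, k=3.0e8, dt=60.0, steps=600)
```

Because the proton bunch is two orders of magnitude more intense, the same absolute burn-off depletes the lead bunch far faster in relative terms, which is why the lead-beam lifetime dominates the luminosity evolution.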
In this work, 2016 data from beam-loss monitors, combined with the luminosity and the loss rate of the beam intensity, are used to determine the cross section of proton-lead collisions at a center-of-mass energy of 8.16 TeV. Beam-loss monitors that mainly detect beam losses not caused by the collision process itself are used to determine the total cross section via regression. The analysis yields a total cross section of σ = (2.32 ± 0.01 (stat.) ± 0.20 (sys.)) b, corresponding approximately to a hadronic cross section of σ(had) = (2.24 ± 0.01 (stat.) ± 0.21 (sys.)) b. This value deviates by only 5.7 % from the theoretical value σ(had) = (2.12 ± 0.01) b.
The simulation code for the beam evolution is also used to estimate the integrated luminosity of a future one-month run with proton-lead collisions. The study shows that the luminosity in the ATLAS and CMS experiments can be increased from 15/nb per day in 2016 to 30/nb per day, a significant gain in performance. This mode of operation, however, requires the use of the TCL collimators to protect the dispersion suppressors at ATLAS and CMS from collision fragments.
This work also gives an outlook on the expected luminosity production in proton-nucleus operation using ion species lighter than lead. For example, a change from proton-lead to proton-argon collisions would increase the integrated luminosity from 0.8/nb to 9.4/nb per month in ATLAS and CMS. This is an increase of one order of magnitude and approximately a doubling of the integrated nucleon-nucleon luminosity. There may be a test operation with proton-oxygen collisions in 2023, which will last only a few days and will be operated at low luminosity. The LHCf (LHCb) experiment would achieve the desired integrated luminosity of 1.5/nb (2/nb) within 70 h (35 h) of beam time.
This thesis has two main parts.
The first part is based on our publication [1], where we use perturbation theory to calculate decay rates of magnons in the Kitaev-Heisenberg-Γ (KHΓ) model. This model describes the magnetic properties of the material α-RuCl3, which is a candidate for a Kitaev spin liquid. Our motivation is to validate a previous calculation from Ref. [2]. In this thesis, we map out the classical phase diagram of the KHΓ model. We use the Holstein-Primakoff
transformation and the 1/S expansion to describe the low-temperature dynamics of the Kitaev-Heisenberg-Γ model in the experimentally relevant zigzag phase in terms of spin waves. By parametrizing the spin waves in terms of Hermitian fields, we find a special parameter region within the KHΓ model where the analytical expressions simplify. This enables us to construct the Bogoliubov transformation analytically. For a representative point in the special parameter region, we use these results to numerically calculate the magnon damping, which to leading order is caused by the decay of single magnons into two. We also calculate the dynamical structure factor of the magnons.
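For orientation, the leading-order Holstein-Primakoff expansion underlying such a 1/S treatment reads, in its standard textbook form (not specific to this thesis):

```latex
S^{z}_{i} = S - a^{\dagger}_{i} a_{i}, \qquad
S^{+}_{i} = \sqrt{2S}\,\sqrt{1 - \frac{a^{\dagger}_{i} a_{i}}{2S}}\; a_{i}
          \approx \sqrt{2S}\, a_{i}, \qquad
S^{-}_{i} = \bigl(S^{+}_{i}\bigr)^{\dagger}.
```

The quadratic part of the resulting magnon Hamiltonian is then brought to diagonal form, $H_2 = \sum_{\mathbf{k}} \omega_{\mathbf{k}}\, b^{\dagger}_{\mathbf{k}} b_{\mathbf{k}}$, by a Bogoliubov transformation, while the cubic 1/S terms generate the one-magnon-to-two-magnon decays responsible for the damping.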
The second part of this thesis is based on our publication [3], where we use the functional renormalization group to analyze a discontinuous quantum phase transition towards a non-Fermi liquid phase in the Sachdev-Ye-Kitaev (SYK) model. In this thesis, we perform a disorder average over the random interactions in the SYK model. We argue that in the thermodynamic limit, the average renormalization group (RG) flow of the SYK model is identical to the RG flow of an effective disorder averaged model. Using the functional RG, we find a fixed point describing the discontinuous phase transition to the non-Fermi liquid phase at zero temperature. Surprisingly, we find a finite anomalous dimension of the fermions, which indicates critical fluctuations and is unusual for a discontinuous transition. We also determine the RG flow at zero temperature, and relate it to the phase diagram known from the literature.
This Ph.D. thesis, entitled "Characterisation of laser-driven radiation beams: Gamma-ray dosimetry and Monte Carlo simulations of optimised target geometry for record-breaking efficiency of MeV gamma-sources", is dedicated to the study of the acceleration of electrons by intense sub-picosecond laser pulses propagating in a sub-millimetre plasma with near-critical electron density (NCD), and to the resulting generation of gamma bremsstrahlung and positrons in targets of different materials and thicknesses.
Laser-driven particle acceleration is an area of increasing scientific interest since the recent development of short-pulse, high-intensity laser systems. The interaction of intense, high-energy, short-pulse lasers with solid targets leads to the production of high-energy electrons in the relativistic laser intensity regime above 10^18 W/cm². These electrons play the leading role in the first stage of the laser-matter interaction, which leads to the creation of laser-driven sources of particles and radiation. Therefore, optimising the electron beam parameters towards a higher effective temperature and beam charge, together with a small divergence, plays a decisive role, especially for the subsequent detection and characterisation of laser-driven photon and positron beams.
In the context of this work, experiments were carried out at the PHELIX laser system (Petawatt High-Energy Laser for Heavy Ion eXperiments) at the GSI Helmholtz Centre for Heavy-Ion Research in Darmstadt, Germany. This thesis presents a thermoluminescence dosimetry (TLD) based method for the measurement of bremsstrahlung spectra in the energy range from 30 keV to 100 MeV. The results of the TLD measurements reinforced the observed tendency towards a strong increase of the mean electron energy and of the number of super-ponderomotive electrons. In the case of laser interaction with long-scale NCD plasmas, the gamma-radiation dose measured in the direction of the laser pulse propagation showed a 1000-fold increase compared to high-contrast shots onto plane foils and to doses measured perpendicular to the laser propagation direction, for all combinations of targets and laser parameters used.
In this thesis I present a novel characterisation method, applicable to laser-driven beams, based on a combination of TLD measurements and Monte Carlo FLUKA simulations. The thermoluminescence-detector-based spectrometry method for the simultaneous detection of electrons and photons from relativistic laser-induced plasmas, initially developed by Behrens et al. (Behrens et al., 2003) and further applied in experiments at the PHELIX laser (Horst et al., 2015), delivered good spectral information from keV energies up to a few MeV. As shown in (Horst et al., 2015), however, this method was not suitable to resolve the content of photon spectra above 10 MeV because of the dominant presence of electrons. Therefore, I developed a new method for evaluating the incident electron spectra from the TLD readings. For this purpose, an unfolding algorithm was written in MATLAB, based on a sequential enumeration of matching data series of the dose values measured by the dosimeters and calculated with FLUKA simulations. The significant advantage of this method is the ability to obtain the spectrum of incident electrons in the low-energy range down to 1 keV, which is very difficult to measure reliably using traditional electron spectrometers.
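The core of such an enumeration-based unfolding can be sketched in a few lines; all dose series and temperature labels below are invented for illustration, and the real algorithm works in MATLAB on full FLUKA response data:

```python
# Hedged sketch: each candidate electron spectrum has a simulated TLD dose
# pattern; the measured pattern is compared against every candidate and the
# best chi-square match is selected, mimicking the sequential enumeration
# of matching data series.

def chi2(measured, simulated):
    """Pearson-like chi-square between a measured and a simulated dose series."""
    return sum((m - s) ** 2 / s for m, s in zip(measured, simulated))

def unfold(measured, candidates):
    """candidates: dict mapping spectrum label -> simulated dose series."""
    return min(candidates, key=lambda label: chi2(measured, candidates[label]))

# invented dose series (arbitrary units) for three candidate temperatures
candidates = {
    "T=1.5 MeV": [10.0, 6.0, 3.0, 1.0],
    "T=5 MeV":   [10.0, 8.0, 5.0, 2.5],
    "T=13 MeV":  [10.0, 9.0, 7.0, 5.0],
}
measured = [10.2, 8.9, 7.1, 4.8]
print(unfold(measured, candidates))  # -> T=13 MeV
```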
The evaluation of the effective temperature of super-ponderomotive electrons, retrieved from the measured TLD doses by means of Monte Carlo simulations, demonstrated that low-density polymer foam layers irradiated by the relativistic sub-ps laser pulse provide a strong increase of the electron effective temperature, from 1.5-2 MeV for the relativistic laser interaction with a metallic foil up to 13 MeV for laser shots onto the pre-ionized foam, together with a more than 10 times higher charge carried by relativistic electrons.
A progressive simulation method for whole electron spectra described by a two-temperature Maxwellian distribution function has been developed, and the simulated doses were compared with the acquired experimental data. The advanced feature of this method, which distinguishes it from simulations of the photon spectrum using mono-energetic electron beams interacting with the target (Nilgün Demir, 2013; Nilgün Demir, 2019) or an initial electron spectrum expressed as a function of a single electron temperature (Fiorini, 2012), is the ability to simulate an initial electron spectrum described by a Maxwellian distribution function with two temperatures.
An important objective of this thesis was the study and characterisation of laser-driven photon beams; in addition, positron beams were evaluated. The investigation of bremsstrahlung photon and positron spectra from high-Z targets, varying the target thickness from 10 µm to 4 mm in simulated interactions of electron spectra with Maxwellian distribution functions, made it possible to define an optimal thickness at which the fluences of photons and positrons are maximal. Furthermore, based on the results of the FLUKA simulations, gold was found to be the most suitable material for future experiments as an e−γ target because of its highest bremsstrahlung yield.
Additionally, Monte Carlo simulations were performed using the electron beam parameters obtained from the electron acceleration process in laser-plasma interactions, simulated with a particle-in-cell (PIC) code for two laser energies of 20 J and 200 J. The corresponding electron spectra were imported into the Monte Carlo code FLUKA to simulate the production of bremsstrahlung photons and positrons in a Au converter. The FLUKA simulations showed that the conversion efficiency into MeV gammas can reach a record 10%, which reinforces the generation of positrons. The obtained results demonstrate the advantages of long-scale plasmas of near-critical density (NCD) for increasing the parameters of MeV particle and photon beams generated in relativistic laser-plasma interaction. The efficiency of the laser-driven generation of MeV electrons and photons is essentially enhanced by the application of low-density polymer foams.
High-energy heavy-ion collisions offer the unique opportunity to produce and study dense nuclear matter in the laboratory. The future Facility for Antiproton and Ion Research (FAIR) in Darmstadt, Germany, will provide beams of heavy nuclei with kinetic energies of up to 11 GeV/nucleon. At these energies, the nuclear matter in the collision zone of two nuclei will be compressed to densities of up to 5-10 times the saturation density of atomic nuclei, similar to matter densities existing in the core of massive neutron stars. Under those conditions, nucleons are expected to melt and form a new state of matter consisting of quarks and gluons, the so-called Quark-Gluon Plasma (QGP). The search for such a phase transition from hadronic to partonic matter and the exploration of the nuclear matter equation of state at high densities are the major goals of heavy-ion experiments worldwide.
The observables, which are proposed to probe the properties of dense nuclear matter and possible phase transitions, include multi-strange hyperons, antibaryons, lepton pairs, collective flow of identified particles, fluctuations and correlations of various particles, particles containing charm quarks, and hypernuclei. These observables have to be measured in multi-dimensions, i.e. as function of collision centrality, rapidity, transverse momentum, energy, emission angle, etc., which requires extremely high statistics. Moreover, some of these particles are produced very rarely.
Therefore, the Compressed Baryonic Matter (CBM) experiment at FAIR is designed to run at collision rates of up to 10 MHz, in order to perform measurements with unprecedented precision. Due to the complicated decay topology of many observables, no hardware trigger can be applied, and the data have to be analysed online in order to filter out the interesting events.
This strategy requires free-streaming read-out electronics, which provides time stamps to all detector signals, a high performance computer center, and high-speed reconstruction algorithms, which provide an online track and event reconstruction based on time and position information of the detector hits ("4-D" reconstruction).
The core detector of the CBM experiment is the Silicon Tracking System (STS). The main task of the STS is to provide track reconstruction and momentum determination of charged particles originating from beam-target interactions. To fulfil these tasks, the STS is located in the large gap of a superconducting dipole magnet with a bending power of 1 Tm, providing momentum measurements for charged particles. The STS comprises 8 detector stations, positioned from 30 cm to 100 cm downstream of the target. The active area of the stations grows from 40×50 cm² to 100×100 cm², with a total area of 4 m². The double-sided silicon sensors have 1024 strips on each side, with a stereo angle of 7.5° on the p-side and a strip pitch of 58 μm. The strip length ranges from 2 cm for sensors located in close vicinity to the beam axis up to 12 cm for sensors where the flux of the reaction products drops substantially. In total, the STS consists of 896 sensors mounted on 106 detector ladders. The detector readout electronics dissipates 40 kW and will be equipped with a bi-phase CO₂ cooling system. The detector, including electronics, will be mounted in a thermal enclosure to allow sensor operation below −5 °C, which minimizes radiation-induced leakage currents.
The task of the STS is to measure the trajectories of up to 800 charged particles per collision with an efficiency of more than 95% and a momentum resolution of 1-2%. In order to guarantee the required performance over the full lifetime of the CBM experiment, the detector system has to have a low material budget, a high granularity, a high signal-to-noise ratio (SNR), and a high radiation tolerance. As a result of optimisation studies, the STS consists of double-sided silicon microstrip sensors, about 300 μm thick, which have to provide an SNR of more than 10, even after irradiation with the expected equivalent lifetime fluence of 10^14 1-MeV n_eq cm^−2.
This thesis is devoted to the characterization of double-sided silicon microstrip sensors, with an emphasis on the investigation of their radiation hardness. Different prototypes of double-sided silicon sensors produced by two vendors have been irradiated with 23 MeV protons up to twice the lifetime fluence of the CBM experiment (2 × 10^14 1-MeV n_eq cm^−2).
The sensor properties have been characterised before and after irradiation. It was found that after irradiation with twice the lifetime fluence the leakage current increased 1000-fold, resulting in increased shot noise. Moreover, the relative charge collection efficiency of irradiated with respect to non-irradiated sensors drops to 85% for the lifetime equivalent fluence and to 73% for twice the lifetime fluence, on both the p-side and the n-side. For non-irradiated sensors the SNR was found to be in the range of 20-25, whereas for irradiated sensors it dropped to 12-17.
In addition to the sensor characterization, part of this thesis was devoted to the optimisation of the sensor readout scheme. In order to investigate a possible increase of the SNR, and to reduce the number of readout channels in the outer aperture of the STS, three versions of routing lines have been realized for the p-side readout of the sensor prototype and tested in the laboratory and under beam conditions.
The tests have been performed with different inclination angles between the beam direction and the sensor surface, corresponding to the polar angle acceptance of the CBM experiment, which is from 2.5° to 25°.
As a result of the studies carried out in this thesis, the radiation hardness of the double-sided silicon microstrip sensors developed for the CBM STS detector was confirmed. The advantage of individual read-out of sensor channels in the lateral regions of the detector was also verified. This allowed the tendering process for sensor series production in industry to start, an important step towards the construction of the detector in the coming years.
In this thesis, a novel 257 kHz chopper device was numerically developed, technically designed and experimentally commissioned; a 4-solenoid, low-energy ion beam transport line was numerically investigated, installed and experimentally commissioned; and a novel massless beam-separation system was numerically developed.
The chopper combines a pulsed electric field with a static magnetic field in an ExB or Wien-filter type field configuration. Chopped beam pulses with a 257 kHz repetition rate and rise times of 110 ns were experimentally achieved using a 14 keV helium beam.
Due to the achieved results, the complete LEBT line for the future Frankfurt Neutron Source FRANZ is ready to deliver a dc or a pulsed beam. At the same time, the LEBT section represents an attractive test stand for the study of low-energy ion beams. It combines magnetic lenses, which allow space-charge compensated beam transport, and a chopper system capable of producing short beam pulses in the hundred nanosecond range. Since these beam pulses are transported onwards, their longitudinal and transverse properties can be analyzed. The pulse duration and time of flight are well below the rise time for the space-charge compensation through residual gas ionization. This opens the possibility for dedicated investigations of the transport of short, low-energy beam pulses including longitudinal and transverse space-charge effects and of relevant issues like the dynamics of space-charge compensation and electron effects in short pulses.
The realization of a fast and robust closed orbit feedback (COFB) system for on-ramp orbit correction at the SIS18 synchrotron of the FAIR project is reported in this thesis. SIS18 exhibits some peculiarities, including on-ramp optics variation, very short ramps (200 ms to 1 s), and a cycle-to-cycle variation of beam parameters. The realized fast COFB system, robust against the above-mentioned features of SIS18, is the first of its kind, and the course of its realization led to some novel contributions to the field of closed orbit correction. A new method relying on a discrete Fourier transform (DFT)-based decomposition of the orbit response matrix (ORM) has been introduced, exploiting the symmetry in the arrangement of beam position monitors (BPMs) and corrector magnets in synchrotrons. A nearest-circulant approximation has also been introduced for synchrotrons deviating slightly from this symmetry, making the method applicable to the vast majority of synchrotrons. Moreover, the performance and stability analysis of COFB systems in the presence of an ORM mismatch between the synchrotron and the feedback controller is presented. COFB systems are divided into slow and fast regimes, and a new stability criterion, consistent with measurements, is introduced. The practicality of the criterion is verified experimentally at COSY Jülich and used for the analysis of various sources of ORM mismatch at SIS18. The commissioning of the SIS18 COFB system, which relies on Libera Hadron as the main hardware resource for the controller implementation, is also reported in detail. On-ramp orbit correction is demonstrated for the horizontal plane of SIS18, with disturbance rejection up to 600 Hz.
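The benefit of the DFT-based decomposition can be sketched with a toy example; the matrix values below are invented, and a production system would use an FFT rather than this O(n²) DFT. The key fact is that a circulant ORM (symmetric BPM/corrector arrangement) is diagonalized by the DFT, so the correction problem decouples mode by mode:

```python
# Solving C x = b for a circulant matrix C via the DFT: the eigenvalues of C
# are the DFT of its first column, and C x equals the circular convolution
# of that column with x.
import cmath

def dft(x):
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
            for j in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[j] * cmath.exp(2j * cmath.pi * j * k / n) for j in range(n)) / n
            for k in range(n)]

def circulant_solve(first_col, b):
    """x = IDFT( DFT(b) / DFT(c) ): each Fourier mode is corrected independently."""
    eig = dft(first_col)
    B = dft(b)
    return idft([Bj / ej for Bj, ej in zip(B, eig)])

# toy circulant "ORM" first column and a "measured orbit" b (invented numbers)
c = [4.0, 1.0, 0.5, 1.0]
b = [1.0, 2.0, 3.0, 4.0]
x = circulant_solve(c, b)

# verify C x = b row by row: row i of a circulant matrix is c rotated by i
n = len(c)
residual = max(abs(sum(c[(i - k) % n] * x[k] for k in range(n)) - b[i])
               for i in range(n))
print(residual < 1e-9)  # True
```

The nearest-circulant approximation mentioned above would replace a slightly asymmetric ORM by its closest circulant before applying this decomposition.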
In this thesis, the flow coefficients vn of orders n = 1-6 are studied for protons and light nuclei in Au+Au collisions at Ebeam = 1.23 AGeV, equivalent to a center-of-mass energy in the nucleon-nucleon system of √sNN = 2.4 GeV. The detailed multi-differential measurement is performed with the HADES experiment at SIS18/GSI. HADES, with its large acceptance covering almost the full azimuthal angle, combined with its high mass resolution and good particle-identification capability, is well equipped to study the azimuthal flow pattern not only for protons, deuterons, and tritons, but also for charged pions, kaons, φ-mesons, electrons/positrons, as well as light nuclei like helions and alphas. The high statistics of more than seven billion Au+Au collisions recorded in April/May 2012 with HADES enables for the first time the measurement of higher-order flow coefficients up to the 6th harmonic. Since the Fourier coefficients of 7th and 8th order are beyond statistical significance, only an upper bound is given. The Au+Au collision system is the largest reaction system, with the highest particle multiplicities, measured so far with HADES. A dedicated correction method for the flow measurement had to be developed to cope with reconstruction inefficiencies due to occupancies of the detector system. The systematic bias of the flow measurement is studied and several sources of uncertainty are identified, which mainly arise from the quality selection criteria applied to the analyzed tracks, the correction procedure for reconstruction inefficiencies, the procedures for particle identification (PID), and the effects of an azimuthally non-uniform detector acceptance. The systematic point-to-point uncertainties are determined separately for each particle type (proton, deuteron and triton), for the order of the flow harmonics vn, and for the centrality class.
Further, the validity of the results is inspected within their evaluated systematic uncertainties via several consistency checks. In order to enable meaningful comparisons between experimental observations and predictions of theoretical models, the classification of events should be well defined and lie in sufficiently narrow intervals of impact parameter. Part of this work included the implementation of the procedure to determine the centrality and the orientation of the reaction plane.
In the conclusion, the experimental results are discussed, including various scaling properties of the flow harmonics. It is found that the ratio v4/v2 for protons and light nuclei (deuterons and tritons) at midrapidity approaches values close to 0.5 at high transverse momenta for all centrality classes, which has been suggested to be indicative of ideal hydrodynamic behaviour. A remarkable scaling is observed in the pt dependence of v2 (v4) at mid-rapidity for the three hydrogen isotopes when dividing v2 (v4) by the nuclear mass number A (A²) and pt by A. This is consistent with naive expectations from nucleon coalescence, but raises the question whether this mass ordering can also be explained by a hydrodynamically inspired approach, like the blast-wave model. The relation of v2 and v4 to the shape of the initial eccentricity of the collision system is also studied. It is found that v2 is independent of centrality for all three particle species after dividing it by the averaged second-order participant eccentricity, v2/⟨ε2⟩. A similar scaling is shown for v4 after division by ⟨ε2⟩².
The strong force is one of the four fundamental interactions; its theory is called Quantum Chromodynamics (QCD). A many-body system of strongly interacting particles (QCD matter) can exist in different phases depending on temperature (T) and baryonic chemical potential (µB). The phases and the transitions between them can be visualized in a µB-T phase diagram. Extraction of the properties of QCD matter, such as compressibility, viscosity and various susceptibilities, and of its Equation of State (EoS), is an important aspect of the study of QCD matter. In the region of near-zero baryonic chemical potential and low temperatures, the degrees of freedom of QCD matter are hadrons, in which quarks and gluons are confined, while at higher temperatures partonic (quark and gluon) degrees of freedom dominate. This partonic (deconfined) state is called the quark-gluon plasma (QGP) and is intensively studied at CERN and BNL. According to lattice QCD calculations at µB = 0, the transition to the QGP is smooth (a cross-over) and takes place at T ≈ 156 MeV. The region of the QCD phase diagram where matter is compressed to densities of a few times normal nuclear density (µB of several hundred MeV) is not accessible to current lattice QCD calculations and is a subject of intensive research. Some phenomenological models predict a first-order phase transition between the hadronic and partonic phases in the region of T ≲ 100 MeV and µB ≳ 500 MeV. The search for signs of a possible phase transition and a critical point, or clarification of whether the smooth cross-over continues in this region, are the main goals of the near-future exploration of the QCD phase diagram.
In the laboratory, a scan of the QCD phase diagram can be performed via heavy-ion collisions. The region of the QCD phase diagram at T ≳ 150 MeV and µB ≈ 0 is accessible in collisions at LHC energies (√sNN of several TeV), while the region of T ≲ 100 MeV and µB ≳ 500 MeV can be studied with collisions at √sNN of a few GeV. The QCD matter created in the overlap region of the colliding nuclei (the fireball) expands rapidly during the collision evolution. In the fireball there are strong temperature and pressure gradients, extreme electromagnetic fields, and an exchange of angular momentum and spin between the constituents of the system. These effects result in various collective phenomena. Pressure gradients and the scattering of particles, together with the initial spatial anisotropy of the density distribution in the fireball, form anisotropic flow: a momentum (azimuthal) anisotropy in the emission of produced particles. The correlation of particle spin with the angular momentum of the colliding nuclei leads to a global polarization of particles. A strong initial magnetic field in the fireball results in a charge dependence and a particle-antiparticle difference of flow and polarization.
Anisotropic flow is quantified by the coefficients vₙ of a Fourier decomposition of the azimuthal-angle distribution of emitted particles relative to the reaction plane, spanned by the beam axis and the impact parameter direction. The first harmonic coefficient v₁ quantifies the directed flow: the preferential particle emission either along or opposite to the impact parameter direction. v₁ is driven by pressure gradients in the fireball and thus probes the compressibility of QCD matter. The change of sign of v₁ at √sNN of several GeV is attributed to a softening of the EoS during the expansion and can thus be evidence of a first-order phase transition. The global polarization coefficient PH is the average value of the hyperon's spin projection on the direction of the angular momentum of the colliding system. It probes the dynamics of QCD matter, such as vorticity, and can shed light on the mechanism of orbital momentum transfer into the spin of produced particles.
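The Fourier definition of the flow coefficients can be made concrete with a toy Monte Carlo (synthetic angles, not experimental data; the input v₂ = 0.1 is an arbitrary choice):

```python
# v_n is the n-th Fourier coefficient of the azimuthal distribution relative
# to the reaction plane Psi_RP:  v_n = < cos( n * (phi - Psi_RP) ) >.
import math
import random

def flow_coefficient(phis, psi_rp, n):
    """Event-averaged estimate of v_n from a list of azimuthal angles."""
    return sum(math.cos(n * (phi - psi_rp)) for phi in phis) / len(phis)

# sample toy angles from dN/dphi ∝ 1 + 2*v2*cos(2*phi) by accept-reject,
# with an assumed input v2 = 0.1 and reaction plane Psi_RP = 0
random.seed(1)
v2_in, psi_rp, phis = 0.1, 0.0, []
while len(phis) < 200_000:
    phi = random.uniform(-math.pi, math.pi)
    if random.uniform(0.0, 1.0 + 2.0 * v2_in) < 1.0 + 2.0 * v2_in * math.cos(2.0 * phi):
        phis.append(phi)

v2 = flow_coefficient(phis, psi_rp, 2)  # recovers ~0.1
v1 = flow_coefficient(phis, psi_rp, 1)  # no directed flow was put in, ~0
print(abs(v2 - v2_in) < 0.01, abs(v1) < 0.01)
```

In a real analysis the reaction plane must itself be estimated from the data, and the coefficients corrected for the finite event-plane resolution.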
In collisions at √sNN of several GeV, which probe the region of the QCD phase diagram at T ≲ 100 MeV and µB ≳ 500 MeV, hadron production is dominated by u and d quarks. Hadrons with strange quarks are produced near threshold, which makes their yields and dynamics sensitive to the density of the fireball. A measurement of flow and polarization, in particular of (multi-)strange particles, therefore provides experimental constraints on the EoS and allows transport coefficients of QCD matter to be extracted from the comparison of data with theoretical model calculations of heavy-ion collisions.
For the continuation of the abstract, see the PDF of the thesis.
A new era in experimental nuclear physics has begun with the start-up of the Large Hadron Collider at CERN and its dedicated heavy-ion detector system ALICE. Measuring the highest energy density ever produced in nucleus-nucleus collisions, the detector has been designed to study the properties of the created hot and dense medium, assumed to be a Quark-Gluon Plasma.
Comprising 18 high-granularity sub-detectors, ALICE delivers data from a few million electronic channels for proton-proton and heavy-ion collisions.
The produced data volume can reach up to 26 GByte/s for central Pb–Pb
collisions at the design luminosity of L = 10^27 cm^−2 s^−1, challenging not only the data storage but also the physics analysis. A High-Level Trigger (HLT) has been built and commissioned to reduce this amount of data to a storable value prior to archiving, by means of data filtering and compression without loss of physics information. Implemented as a large high-performance compute cluster, the HLT is able to perform a full reconstruction of all events at the time of data-taking, which allows triggering based on the information of a complete event. Rare physics probes with high transverse momentum can be identified and selected to enhance the overall physics reach of the experiment.
The commissioning of the HLT is at the center of this thesis. Being deeply embedded in the ALICE data path and therefore interfacing with all other ALICE subsystems, this commissioning posed not only a major challenge but also required a massive coordination effort, which was completed with the first proton-proton collisions reconstructed by the HLT. Furthermore, this thesis is completed by the study and implementation of online high-transverse-momentum triggers.
The equation of state (EoS) of matter at extremely high temperatures and densities is currently not fully understood and remains a major challenge in the field of nuclear physics. Neutron stars harbor such extreme conditions and therefore serve as celestial laboratories for constraining the dense matter EoS. In this thesis, we present a novel algorithm that utilizes the idea of Bayesian analysis and the computational efficiency of neural networks to reconstruct the dense matter equation of state from mass-radius observations of neutron stars. We show that the results are compatible with those from earlier works based on conventional methods, and are in agreement with the limits on tidal deformabilities obtained from the gravitational wave event GW170817. We also observe that the squared speed of sound of the reconstructed EoS features a peak, indicating a likely convergence to the conformal limit at asymptotic densities, as expected from quantum chromodynamics. The novel algorithm can also be applied across various fields faced with computational challenges in solving inverse problems. We further examine the efficiency of deep learning methods for analyzing gravitational waves from compact binary coalescences. In particular, we develop a deep learning classifier to segregate simulated gravitational wave data into three classes: signals from binary black hole mergers, signals from binary neutron star mergers, or white noise without any signals. A second deep learning algorithm allows for the regression of the chirp mass and the combined tidal deformability from simulated binary neutron star mergers. An accurate estimation of these parameters is crucial to constrain the underlying EoS. Lastly, we explore the effects of finite temperatures on the binary neutron star merger remnant from GW170817.
Isentropic EoSs are used to infer the frequencies of the rigidly rotating remnant, which are found to be significantly lower than previous estimates from zero-temperature EoSs. Overall, this thesis presents novel deep learning methods to constrain the neutron star EoS, which will prove useful in the future as more observational data becomes available in the upcoming years.
Construction and commissioning of a setup to study ageing phenomena in high rate gas detectors
(2014)
In high-rate heavy-ion experiments, gaseous detectors face major challenges from the degradation of their performance due to a phenomenon dubbed ageing. In this thesis, a setup for high-precision ageing studies has been constructed and commissioned at the GSI detector laboratory. The main objective is the study of ageing phenomena evoked by materials used to build gaseous detectors for the Compressed Baryonic Matter (CBM) experiment at the future Facility for Antiproton and Ion Research (FAIR).
The precision of the measurement, e.g. of the gain of a gaseous detector, is a key element in ageing studies: it allows the measurement to be performed at realistic rates in an acceptable time span. It is well known that accelerating the ageing by employing high-intensity sources might produce misleading results. The primary objective is to build an apparatus which allows very accurate measurements and is thus sensitive to minute degradations in detector performance. The construction and commissioning of the
setup has been carried out in two steps. In the first step of this work, a simpler setup, which already existed in the GSI detector laboratory, was used to define all conditions related to ageing studies. The outcome of these studies defined the properties and characteristics that had to be met to build and operate a new, sophisticated and precise setup. The existing setup consisted of two identical Multi-Wire Proportional Chambers (MWPCs), a gas mixing station, an 55Fe source, an x-ray generator, an outgassing box, and stainless steel tubing. First, the gain and electric field configuration of the MWPCs were simulated with a combination of a gas simulation program (Magboltz) and an electric field simulation program (Garfield). The performance and operating conditions of the chambers were thoroughly characterised before utilising them in first preparatory ageing tests. The main diagnostic parameter in ageing studies is the detector gain; it is therefore mandatory for precise ageing studies to minimise the systematic and statistical variation of the pressure- and temperature-corrected gain. To achieve the required accuracy, several improvements to the chamber design and the gas system were implemented. In addition, the temperature measurement was optimised. During the preparatory tests, several ageing studies were carried out. The ageing effects of seven materials and gases were investigated during these tests: RTV-3145, Ar/CO2 gas, Durostone flushed with Ar/Isobutane gas, Vetronit G11, Vetronit G11 contaminated with Micro 3000, and Gerband 705. The results of these studies went into the design of the new, sophisticated ageing setup. For example, some tests revealed that there was, even after cleaning, a certain level of contamination with "ageing agents" in the existing setup, which made it imperative to ensure a very high level of cleanness of all components during the construction of the new setup.
The curing period of some test samples, such as glues, and the gas flow rate were found to be very important factors that must be taken into account to obtain comparable results. Important changes in the chamber design were made: the aluminium-Kapton cathodes used in the MWPCs were replaced with multi-wire planes, and the fibreglass housing of the chamber was changed to metal. The second step started with building the new setup, designed on the basis of the findings from the first step. The new ageing setup consists of three MWPCs, two moving platforms, an 55Fe source, a copper-anode X-ray generator, two outgassing boxes, and both flexible and rigid stainless-steel tubes. Before fabrication of the chambers, their electric field and gain were simulated using the Magboltz and Garfield programs. After that, the chambers were installed and tested, and a peak-to-peak residual variation of the corrected gain of 0.3% was achieved. Finally, the complete setup was operated with full functionality under no-ageing conditions for one week; this test revealed very stable gain in all three chambers. Subsequently, two materials (Gerband 705 and RTV-3145) were inserted in the two outgassing boxes and tested, revealing ageing rates of about 0.3%/mC/cm and 3%/mC/cm, respectively. The final test proves the stability and accuracy of the ageing measurements carried out with the ageing setup at the detector laboratory at GSI, which is ready to conduct the envisaged systematic ageing studies.
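The two quantities central to the abstract above can be sketched in a few lines: a density normalisation of the measured gain (gas gain depends on the gas density p/T, so the measured gain is referred to reference conditions), and the ageing rate as relative gain loss per accumulated charge per wire length. This is an illustrative reconstruction; the exponential correction and the slope `k` are assumptions for the sketch, not the calibration actually used in the setup.

```python
import math

def corrected_gain(gain, p_mbar, t_kelvin, p0=1013.25, t0=293.15, k=8.0):
    """Normalise a measured gas gain to reference pressure/temperature.

    Gas gain depends roughly exponentially on the gas density p/T;
    the slope k (hypothetical value here) must be calibrated per chamber.
    """
    density_ratio = (p_mbar / t_kelvin) / (p0 / t0)
    return gain * math.exp(k * (density_ratio - 1.0))

def ageing_rate(g_start, g_end, charge_mC, wire_cm):
    """Relative gain loss in %/mC/cm of accumulated charge per wire length."""
    rel_loss = (g_start - g_end) / g_start
    return 100.0 * rel_loss / (charge_mC / wire_cm)

# Example: 3% gain loss after 10 mC collected on 1 cm of wire -> 0.3 %/mC/cm,
# the order of magnitude quoted for Gerband 705 above.
rate = ageing_rate(1000.0, 970.0, charge_mC=10.0, wire_cm=1.0)
```

At reference pressure and temperature the correction leaves the gain unchanged, which is a quick sanity check of the normalisation.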
The FRANZ accelerator facility is currently being built in the physics experimental hall on the Riedberg campus of Goethe University. FRANZ stands for "Frankfurter Neutronenquelle am Stern-Gerlach-Zentrum" (Frankfurt Neutron Source at the Stern-Gerlach Centre). The facility offers a wide range of experimental possibilities for the investigation of intense, pulsed proton beams. One research focus at the secondary neutron beams is measurements for nuclear astrophysics. The neutrons are produced by a 2 MeV proton beam via the 7Li(p,n)7Be reaction. The planned experiments require both a pulse repetition rate of up to 250 kHz at pulse currents in the 100 mA range, realised here for the first time worldwide, and an extreme pulse compression to one nanosecond, at which pulse currents in the ampere range occur. In addition, continuous-wave beam operation in the mA current range is possible. Many individual accelerator components, such as the ion source, the chopper for pulse shaping, the RF-coupled RFQ-IH combination, the rebuncher in the form of a CH structure, and the bunch compressor, are new developments. Average beam powers of up to 24 kW occur in the low-energy beam transport section, since the ion source must always be operated in continuous-wave mode, even at high current with high pulse repetition rates. Personnel and equipment protection therefore also plays an essential role in the design of the control system for FRANZ. The layout of FRANZ and its main components are explained in Chapter 2. The many different components, such as the high-voltage section, magnets, RF components and cavities, vacuum components, beam diagnostics, and detectors, make it plausible that the control system for such a facility must be specially designed as well. For comparison, Chapter 4 presents the control concepts of current large accelerator projects, namely the "European Spallation Source ESS" and the "Facility for Antiproton and Ion Research FAIR". In the present work, the ion source was chosen as a complex accelerator component for developing and testing control procedures. A flow chart (Fig. 5.15) for starting up and operating the ion source was developed and implemented. In detail, the dependence of the hot-cathode parameters on the operating time was investigated.
From this, an algorithm for predicting a timely filament replacement could be derived. Furthermore, the readjustment of the cathode heating current was automated in order to stabilise the arc discharge voltage within an interval of ±0.5 V. The ramp-up of the filament current was also automated: the change in vacuum pressure as a function of the filament current increase is measured and evaluated, and the next permissible current step is derived from it. In this way, the operating state is reached faster and in a more controlled manner than with manual ramp-up, bringing the goal of unmanned ion source operation closer. In a first test of component control and data acquisition, an ion beam was extracted and transported through the first focusing magnet, a solenoid. The solenoid excitation current and the beam energy were scanned automatically, the data were stored, and a contour plot of the beam current measured behind the focusing lens was generated (Fig. 5). The present work deals only with the "slow" control processes, while the fast processes in the RF control system are regulated independently. In addition to monitoring the operating state of all components, all data required for service and personnel safety are also recorded. The system is based on MNDACS (Mesh Networked Data Acquisition and Control System) and is written in Java. MNDACS consists of a kernel, which runs the component driver software as well as the network server and the graphical user interface (GUI). It also includes the Driver Abstraction Layer (DAL), which provides access to further computers or to local drivers. CORBA serves as the middleware for network communication.
In this way, communication with external software is managed; furthermore, the rerouting of communication in case of line interruptions or a local computer crash is defined. FRANZ has two control levels: the "high-level control" and data processing run over Ethernet, while the interlock and safety system runs over the "low-level control". The network connections use 1 Gb Ethernet links, so that fast data exchange is still possible even with local network disturbances. To keep the computer system running during power failures, an uninterruptible power supply (UPS) was procured in the course of this work and successfully tested at the high-voltage terminal.
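The pressure-guided filament ramp-up described above can be sketched as a simple feedback rule: after each current increment, the observed vacuum pressure rise determines the next allowed step, which shrinks when outgassing is strong. This is an illustrative reconstruction only; the thresholds, step sizes, and function names are hypothetical and not taken from MNDACS.

```python
def next_current_step(dp_mbar, base_step_A=0.5,
                      dp_soft=1e-7, dp_hard=1e-6):
    """Pick the next permissible filament-current increment from the
    pressure rise dp observed after the previous increment.
    All threshold and step values are illustrative placeholders.
    """
    if dp_mbar >= dp_hard:   # strong outgassing: hold and let pressure recover
        return 0.0
    if dp_mbar >= dp_soft:   # moderate outgassing: scale the step down
        return base_step_A * dp_soft / dp_mbar
    return base_step_A       # pressure stable: take the full step

# A moderate pressure rise leads to a reduced step, a large one to a hold.
step = next_current_step(5e-7)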
This thesis aimed at a better understanding of spectacular X-ray bursts. The most likely astrophysical site is a very dense neutron star that accretes H/He-rich matter from a close companion. While falling towards the neutron star, the matter is heated up and a thermonuclear runaway is ignited. The exact description of this process is dominated by the properties of a few proton-rich radioactive isotopes, which have a low interaction probability and hence a high abundance.
The topic of this thesis was therefore an investigation of the short-lived, proton-rich isotopes 31Cl and 32Ar. The Coulomb dissociation method is the modern technique of choice: excitations with energies up to 20 MeV can be induced by the Lorentz-contracted Coulomb field of a lead target. At the GSI Helmholtzzentrum für Schwerionenforschung GmbH in Darmstadt, Germany, an Ar beam was accelerated to an energy of 825 AMeV and fragmented in a beryllium target. The fragment separator was used to select the desired isotopes with a remaining energy of 650 AMeV. They were subsequently directed onto a 208Pb target in the ALADIN/LAND setup. The measurement was performed in inverse kinematics. All reaction products were detected, so that inclusive and exclusive measurements of the respective Coulomb dissociation cross sections were possible.
During the analysis of the experiment, it was possible to extract the energy-differential excitation spectrum of 31Cl, and to constrain astrophysically important parameters for the time-reversed 30S(p,γ)31Cl reaction. A single resonance at 0.443(37) MeV dominates the stellar reaction rate, which was also deduced and compared to previous calculations.
The integrated Coulomb dissociation cross section of this resonance was determined to be 15(6) mb. The astrophysically important one- and two-proton emission channels were analyzed for 32Ar, and energy-differential excitation spectra could be derived. The integrated Coulomb dissociation cross section for two-proton emission was determined with two different techniques: the inclusive measurement yields a cross section of 214(29stat)(20sys) mb, whereas the exclusive reconstruction results in 226(14stat)(23sys) mb. Both results are in very good agreement. The Coulomb dissociation cross section for the one-proton emission channel, extracted solely from the exclusive measurement, is 54(8stat)(6sys) mb.
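For context, the stellar rate deduced from a single narrow resonance such as the one at E_r = 0.443(37) MeV follows the standard textbook narrow-resonance expression, where ωγ is the resonance strength and μ the reduced mass of the p + 30S system:

```latex
N_A \langle \sigma v \rangle \;=\;
N_A \left( \frac{2\pi}{\mu k_B T} \right)^{3/2} \hbar^2 \,(\omega\gamma)\,
\exp\!\left( -\frac{E_r}{k_B T} \right)
```

The exponential dependence on E_r/k_BT is why a precise resonance energy, as measured here, dominates the uncertainty of the rate at burst temperatures.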
Furthermore, the development of the Low Energy Neutron detector Array (LENA) for the upcoming R3B setup is described. The detector will be utilized in charge-exchange reactions to detect the low-energy recoil neutrons from (p,n)-type reactions. Such reaction studies are of particular importance in the astrophysical context and can be used to constrain half-lives under stellar conditions. Within the framework of this work, prototypes of the detector were built and successfully commissioned in several international laboratories.
The analysis was supported by detailed simulations of the detection characteristics.
Crystal growth and characterization of cerium- and ytterbium-based quantum critical materials
(2018)
In modern solid-state physics, topics such as superconductivity, magnetism, and quantum criticality are intensively studied from both the experimental and the theoretical side. Quantum criticality and quantum phase transitions can be explored in systems for which a control parameter exists by which, e.g., a magnetic order is suppressed until the phase transition takes place at zero kelvin, at a quantum critical point (QCP). Quantum critical behaviour is preferably studied on single crystals, since these can be grown with very high purity, so that the measured physical properties are exclusively intrinsic and not masked by impurity effects. The focus of this work was the growth of single crystals and the characterization of materials exhibiting quantum critical phenomena. Elements of the highest available purity served as starting materials. The series YbNi4(P1-xAsx)2 with a ferromagnetic QCP at x = 0.1, the compound YbRh2Si2 with a field-induced QCP at Bcrit = 60 mT, and the series Ce(Ru1-xFex)PO with a QCP at x = 0.86 were investigated. For all compounds, the growth procedure was developed, and single crystals were then grown and characterized. The growth was carried out by the Bridgman method on the one hand and by the Czochralski method on the other. Besides structural and chemical characterization of the single crystals by X-ray powder diffraction, the Laue method, and energy-dispersive X-ray spectroscopy, their specific heat, electrical resistivity, and magnetization were also studied in the temperature range 1.8–300 K. In further collaborations, the crystals were subsequently characterized down to the low-temperature regime (20 mK), and for YbRh2Si2 down to the sub-millikelvin regime.
In addition, single crystals of the further antiferromagnetic compounds SmRh2Si2, GdRh2Si2, GdIr2Si2, HoRh2Si2, and HoIr2Si2 were grown within this dissertation. For these compounds, the focus was on the investigation of electronic surface states by angle-resolved photoemission spectroscopy.
The study of systems whose properties are governed by electronic correlations is a cornerstone of modern solid-state physics. Often, such systems feature unique and distinct properties like Mott metal-insulator transitions, rich phase diagrams, and high sensitivity to subtle changes in the applied conditions. The standard approach to electronic structure calculations, density functional theory (DFT), is able to address the complexity of real-world materials but is known to have serious limitations in the description of correlations; dynamical mean-field theory (DMFT) has therefore become an established method for the treatment of correlated fermions, first on the level of minimal models and later in combination with DFT, termed LDA+DMFT.
This thesis presents theoretical calculations on different materials exhibiting correlated physics, aiming to cover a range both in terms of systems, from rather weakly to strongly correlated, and in terms of methods, from DFT calculations to combined LDA+DMFT calculations. We begin with a study of a selection of iron pnictides, a recently discovered family of high-temperature superconductors with varying degrees of correlation strength, and show that their magnetic and optical properties can be assessed to some degree within DFT, despite the correlated nature of these systems. Next, extending our analysis to include correlations in the framework of LDA+DMFT, we discuss the electronic structure of the iron pnictide LiFeAs, which we find to be well described by Fermi liquid theory with regard to many of its properties; yet we see distinct changes in its Fermi surface upon inclusion of correlations. We continue the study of low-energy properties, and specifically Fermi surfaces, for two more iron pnictides, LaFePO and LiFeP, and predict a topology change of their Fermi surfaces due to the effect of correlations, with possible implications for their superconducting properties. In our last study, we close the circle by presenting LDA+DMFT calculations on an organic molecular crystal on the verge of a Mott metal-insulator transition; there, we find the spectral and optical properties to display signatures of strong electronic correlations beyond Fermi liquid theory.
For the present work, the Auger decay of small molecules after photoionization was analysed by recording the momentum and energy spectra of photo- and Auger electrons resulting from the decay reaction in coincidence with those of the ionic fragments. This allowed a separate examination of the molecular states populated during the ionization step and the decay step of this process. To gain further insight into the decay dynamics, existing theoretical models, which in particular include the interaction of the charged particles produced in the reaction (post-collision interaction), were fitted to the measured energy spectra. This made it possible to treat the molecular states populated in the ionization step separately, so that the emission-angle distributions of the photoelectrons in the molecular frame could be examined individually for each populated initial state. The final states of the decay were separated by analysing the kinetic-energy-release spectrum of the ions and comparing it with calculated potential curves of the contributing final states.
The examination of the Auger decay separated by initial states also made it possible to analyse the influence of these states on the decay dynamics. For this purpose, fitting the model profiles yielded the lifetime of the respective 1s vacancy state in the corresponding decay channel. These lifetimes of each state were determined from the photoelectron energy spectra as a function of various parameters with a precision in the attosecond range.
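The lifetime quoted with attosecond precision follows from the fitted natural width of the 1s vacancy state via the uncertainty relation τ = ħ/Γ. A minimal sketch of the conversion (the width value below is a made-up example, not a result of this work):

```python
HBAR_EV_S = 6.582119569e-16  # reduced Planck constant in eV*s (CODATA)

def lifetime_as(gamma_ev):
    """Lifetime in attoseconds from a natural linewidth Gamma in eV."""
    return HBAR_EV_S / gamma_ev * 1e18

# A hypothetical 1s-hole width of ~0.6 eV corresponds to a lifetime of
# roughly a femtosecond, i.e. around a thousand attoseconds.
tau = lifetime_as(0.6)
```

Since τ scales inversely with Γ, resolving small channel-dependent differences in the fitted widths directly translates into attosecond-level differences in the extracted lifetimes.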
For the efficient acceleration of ions, a radio-frequency quadrupole (RFQ) is usually employed directly after their production in an ion source. The present dissertation deals with the development, construction, and measurement of the prototype of a novel ladder RFQ operated at 325 MHz. The ladder RFQ features a novel mechanical design and aims to combine the advantages of the two RFQ types predominantly in operation, the 4-rod and the 4-vane RFQ. The physical parameters are taken from the specification of the RFQ for the planned proton linac (p-Linac) of the FAIR project at GSI Darmstadt. Furthermore, the current state of planning and simulation of a modulated prototype with the full length of about 3.5 m for conducting beam tests is presented.
The PhD addresses the feasibility of reconstructing open charm mesons with the Compressed Baryonic Matter experiment, which will be installed at the FAIR accelerator complex in Darmstadt, Germany. The measurements will be carried out by means of a dedicated Micro Vertex Detector (MVD), which will be equipped with CMOS Monolithic Active Pixel Sensors (MAPS). The feasibility of reconstructing the particles with a proposed detector setup was studied.
To obtain conclusive results, the properties of a MAPS prototype were measured in a beam test at the CERN-SPS accelerator. Based on the results achieved, a dedicated simulation software for the sensors was developed and implemented into the software framework of CBM (CBMRoot). Simulations on the reconstruction of D0-mesons were carried out. It is concluded that the reconstruction of those particles is possible.
The PhD introduces the physics motivation for open charm measurements, presents the results of the MAPS measurements, and introduces the innovative simulation model for those sensors as well as the concept and results of the simulations of the D0 reconstruction.
The radio-frequency quadrupole (RFQ) is typically used as the first accelerating element in accelerator facilities. The electric quadrupole field enables simultaneous focusing and acceleration of the ion beam. In addition, the RFQ is able to form the DC beam from the ion source into particle packets (bunches), which are required by the subsequent drift-tube accelerators. The aim of the present work was to investigate the feasibility of a 325 MHz 4-rod RFQ accelerator. A frequency of 325 MHz is an unusually high operating frequency for the 4-rod structure and is required, e.g., for the proton linac of the FAIR project. One problem here was that, due to the structurally asymmetric electrode mounting and the high frequency, a dipole field superimposed on the quadrupole field is generated. This disturbing field can, e.g., lead to an offset of the beam axis. For this reason, the influence of various parameters on the resonance frequency and the dipole field of the 4-rod structure was fundamentally investigated in simulations. Strategies to compensate the dipole field were developed and applied to a prototype. In addition, the behaviour of higher oscillation modes of this structure was simulated. In this context, simulations of fringe fields between the 4-rod electrodes and the tank wall were also carried out in order to exclude detrimental effects on the beam quality. Based on the simulation results, a prototype was manufactured and tested at power levels of up to 40 kW to demonstrate its operating characteristics. The electrode voltage was determined by gamma spectroscopy, and the shunt impedance was calculated from it. These values were compared with other methods of shunt-impedance determination. In addition, alternative RFQ resonator concepts were also examined with respect to their feasibility for the proton linac.
The influences of various parameters on the operating frequency, the possibilities of frequency tuning, and the adjustment of the longitudinal voltage distribution of manufactured models were compared in a discussion.
This dissertation presents the development of a new radio frequency quadrupole (RFQ) structure of the 4-rod type with an operating frequency of 108 MHz for the acceleration of heavy ions with mass-to-charge ratios of up to 8.5 at high duty cycles up to CW operation ("continuous wave") at the High Charge Injector (HLI) of the GSI Helmholtz Centre for Heavy Ion Research in Darmstadt.
The need to develop a completely new RFQ for the HLI arises from the fact that the previously designed and built 4-rod RFQ structure, commissioned at the HLI in 2010 as part of the planned HLI upgrade programme, could not achieve the desired operating modes in either pulsed or CW operation, even after several years of operating experience and considerable efforts to eliminate or at least mitigate the severe operational instabilities. Mechanical vibrations of the electrodes, which result in strongly modulated power reflection, as well as the high thermal sensitivity proved to be particularly problematic.
In addition to the RF design of the new RFQ by simulations performed with the CST Microwave Studio software, the investigations focused on the mechanical analysis of vibrations of the electrode rods caused by RF operation, for which the ANSYS Workbench software was used. Due to the high thermal load on the RFQ structure of more than 30 kW/m in CW operation, an accurate analysis of the thermal effects on electrode deformation and of the resulting frequency detuning of the resonator is also required; this was investigated by simulations within the capabilities of CST Mphysics Studio.
Based on the results of the design studies carried out by simulations and the thereby achieved design optimizations, a 4-rod RFQ prototype with 6 stems was finally manufactured, on which most of the properties expected from the simulations could be validated by measurements of the RF characteristics as well as of the vibration behavior.
Finally, based on the results of the pre-tests and considering a newly developed beam dynamics concept, a completely revised RF design for a new full-length HLI-RFQ was derived from the prototype design.
Within this work, an improved buncher system for radio-frequency accelerators with low and medium ion currents was developed. The developed methodology made it possible to design an effective, simplified buncher system for injection into RF accelerators such as RFQs, cyclotrons, DTLs, etc., achieving small output emittances and considerable beam transmission. To match a mono-energetic, continuous beam from an ion source for injection into a radio-frequency accelerator structure, an energy modulation is required, which over a subsequent drift section leads to longitudinal focusing of the beam. A sawtooth waveform achieves the ideal energy modulation because of the linear dependence between the energy of the particles and their relative phases. However, this is not technologically feasible, since particle accelerators require voltage levels in the kV to 100 kV range. Instead, a spatial separation of the sinusoidal excitation at the fundamental frequency and higher harmonics is possible for this purpose.
Therefore, an improved harmonic buncher, the so-called "Double Drift Harmonic Buncher" (DDHB), was developed in this work, which has numerous advantages; a small longitudinal emittance as well as financial aspects speak for this approach. The main elements of a DDHB system are two cavities separated by a drift length L1, where the first resonator is operated at the fundamental frequency at -90° synchronous phase with applied voltage V1, and the second resonator at the second harmonic frequency at +90° synchronous phase with applied voltage V2. Finally, a second drift L2 at the end of the array is required for longitudinal beam focusing at the main accelerator entrance. Such a setup thus fulfils the desired goal of a high capture efficiency and a small longitudinal emittance by adjusting the four design parameters V1, L1, V2, and L2.
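The two-cavity scheme works because adding a second harmonic to the fundamental sine extends the linear part of the energy modulation toward the ideal sawtooth. A sketch under a simple assumption: choosing V2 = V1/8 cancels the leading cubic nonlinearity around the synchronous phase (a real DDHB design optimises V1, L1, V2, and L2 jointly for capture efficiency and emittance, so this ratio is illustrative only).

```python
import math

def energy_modulation(phi, v1=1.0):
    """Two-harmonic buncher kick: fundamental at -90 deg plus second
    harmonic at +90 deg synchronous phase. With v2 = v1/8 the cubic
    term of the Taylor expansion cancels, extending the linear range.
    """
    v2 = v1 / 8.0
    return v1 * math.sin(phi) - v2 * math.sin(2.0 * phi)

# Compare with a pure sine at phi = 0.5 rad: the two-harmonic kick stays
# close to the ideal linear ramp (slope v1 - 2*v2 = 0.75 at the origin),
# while the single sine already deviates noticeably.
lin = 0.75 * 0.5
two_harm = energy_modulation(0.5)
single = math.sin(0.5)
```

The residual deviation of the two-harmonic waveform from the linear ramp at 0.5 rad is more than an order of magnitude smaller than that of the pure sine, which is precisely why the second cavity pays off in longitudinal emittance.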
Understanding the focusing of an initially DC beam, including space-charge forces, is one of the essential parts of beam physics. Many commercial codes offer simulation capabilities in this field of application. However, their approaches mostly remain hidden from the user, or important details for accurately reproducing the underlying concept are missing. Therefore, one main task of this work was to develop a dedicated multi-particle tracking beam dynamics code (BCDC), in which the space-charge effect during the bunching process, starting from a DC beam, is calculated. The BCDC code contains elementary routines such as drift, accelerating gap, and magnetic lens for transverse beam focusing, as well as space-charge calculations that take the effects of the nearest-neighbour bunches (NNB) into account. The space-charge algorithm in BCDC is based on a direct Coulomb grid-grid interaction and calculations of the electric field by localizing the charge density on a Cartesian grid. To achieve accuracy, the field calculations are extended longitudinally and symmetrically around the central bucket (of size βλ), so that the simulation region is three times as large. After each step, the central particle distribution is copied into the neighbouring buckets. The resulting fields in the main grid region are then recalculated by superimposing the electric fields in the main grid with those from the neighbouring regions. Without this method, a continuous beam that is defined in the simulation only within one cell of length βλ would, for example, lead to a resulting space-charge field component Ez at both edges of the cell. Such an unphysical result could already be largely eliminated by applying the NNB technique.
In addition to the NNB feature, BCDC has a further special feature, namely the so-called space-charge compensation (SCC). Due to the ionization of the residual gas, partial space-charge compensation occurs along the low-energy beam transport, at and behind the buncher system, with different percentages. One of the main goals of the DDHB concept is to develop it for high-current beam applications, where partial space-charge compensation allows the design to reach higher current levels in practice. This makes the BCDC program a powerful tool for simulations in future high-current projects. Proof-of-principle designs were developed in this work.
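The nearest-neighbour-bunch trick can be illustrated in one dimension: the longitudinal field of a periodic beam is obtained by superimposing the field of the central charge distribution with that of copies shifted by ±βλ, which suppresses the artificial Ez at the cell edges. A minimal sketch with on-axis point charges (the grid deposition and the full 3D solver of BCDC are omitted; positions and charges are invented for illustration):

```python
def ez(z, positions, q=1.0):
    """On-axis longitudinal field of point charges, Coulomb constant
    set to 1: each charge contributes q*sign(z - zq)/(z - zq)^2."""
    e = 0.0
    for zq in positions:
        d = z - zq
        e += q * (1.0 if d > 0 else -1.0) / d ** 2
    return e

cell = [-0.25, 0.0, 0.25]      # charges of the central bucket
L = 1.0                        # bucket length (beta * lambda)
edge = 0.5 * L

# Central bucket alone: a large, unphysical Ez appears at the cell edge.
e_central = ez(edge, cell)

# NNB: copy the central distribution into both neighbouring buckets
# and superimpose the fields; the edge field largely cancels.
nnb = cell + [z + L for z in cell] + [z - L for z in cell]
e_nnb = ez(edge, nnb)
```

With the two neighbour copies, the residual edge field drops by more than an order of magnitude; with the full periodic array it would vanish by symmetry, which is the behaviour a continuous beam must show.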
This dissertation deals with the development of FAIR-relevant X-ray diagnostics based on the interaction of lasers and particle beams with matter. The associated experimental methods are intended to be employed in the HIHEX experiments in the HHT cave of the GSI Helmholtz Centre for Heavy-Ion Research GmbH (GSI) in Phase 0 and in the APPA cave at the Facility for Antiproton and Ion Research in Darmstadt, Germany.
Diagnostics of the high-areal-density targets that will be used in FAIR experiments demands intense and highly penetrating X-ray sources. Laser-generated, well-directed relativistic electron beams interacting with high-Z materials are an excellent tool for the generation of short-pulse, highly luminous sources of MeV gammas.
In pilot experiments carried out at the PHELIX laser system at GSI Darmstadt, relativistic electrons were produced in a long-scale plasma of near-critical electron density (NCD) by the mechanism of direct laser acceleration (DLA). Low-density polymer foam layers, preionised by a well-defined nanosecond laser pulse, were used as NCD targets. The analysis of the measured electron spectra showed up to a 10-fold increase of the electron "temperature", from T_Hot = 1–2 MeV, measured for the interaction of a 1–2 ×10^19 W/cm^2 ps laser pulse with a planar foil, up to 14 MeV when the relativistic laser pulse propagates through a foam layer preionised by the ns pulse. In this case, electron energies of up to 80–90 MeV were registered. The increase of the electron energy was accompanied by a strong increase of the number of relativistic electrons and a well-defined directionality of the relativistic electron beam, measured to be (12±1)° (FWHM). This directionality increases the gamma flux on target by far compared to soft X-ray sources.
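The hot-electron "temperature" quoted above is conventionally extracted from the exponential tail of the measured spectrum, dN/dE ∝ exp(-E/T_hot), i.e. from the slope of log(counts) versus energy. A sketch with synthetic data (the numbers are invented for illustration, not measured values):

```python
import math

def fit_t_hot(energies_mev, counts):
    """Least-squares slope of ln(counts) vs E; T_hot = -1/slope (MeV)."""
    ys = [math.log(c) for c in counts]
    n = len(energies_mev)
    mx = sum(energies_mev) / n
    my = sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(energies_mev, ys)) \
            / sum((x - mx) ** 2 for x in energies_mev)
    return -1.0 / slope

# Synthetic exponential spectrum with T_hot = 14 MeV
es = [10.0, 20.0, 30.0, 40.0, 60.0, 80.0]
ns = [math.exp(-e / 14.0) * 1e6 for e in es]
t_hot = fit_t_hot(es, ns)
```

For real spectrometer data the fit is restricted to the high-energy tail, where the exponential model holds; low-energy bins and detector response must be excluded or unfolded first.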
In addition to laser-based active diagnostics, passive techniques involving the inherent X-ray fluorescence radiation of projectile and target emitted during the heavy-ion-target interaction can be used to measure the ion-beam distribution during a shot. This information is of great importance, since the target size is chosen to be smaller than the beam focus in order to ensure homogeneous heating of the HIHEX target by the ion beam. Large amounts of parasitic radiation and activation of experimental equipment are expected in experiments at the APPA cave. For this reason, all electronic devices must be placed at a safe distance from the target chamber. In order to transport the signal over a large distance, the X-ray image of the target irradiated by heavy ions has to be converted into an optical one.
For these purposes, the X-ray Conversion to Optical radiation and Transport (XCOT) system was developed within the framework of a BMBF project and commissioned in two beam times at the UNILAC at GSI during this work.
In the experiments, we observed intense radiation of target atoms (K-shell transitions in Cu at 8–8.3 keV and L-shell transitions in Ta) ionised in collisions with heavy ions, as well as Doppler-shifted L-shell transitions of Au projectiles passing through the targets. This radiation can be used for monochromatic (dispersive elements such as bent crystals) or polychromatic (pinhole) 2D X-ray mapping of the ion-beam intensity distribution in the interaction region during the beam-target interaction. We measured the efficiency of X-ray photon production as a function of the target thickness and the number of ions passing through the target. The spatial resolution of the XCOT system based on the multi-pinhole camera was measured to be (91±17) μm for an image magnification factor M = 2. It was considerably improved by the application of a toroidally bent quartz crystal and reached 30 μm at M = 6. This resolution is optimal for imaging the distribution of an ion beam of 1 mm diameter. As a next step, the XCOT system will be tested during the SIS18 beam time at the HHT experimental area.
The laser-driven acceleration of protons from thin foils irradiated by hollow high-intensity laser beams in the regime of target normal sheath acceleration (TNSA) is reported for the first time. The use of hollow beams aims at reducing the initial emission solid angle of the TNSA source, due to a flattening of the electron sheath at the target rear side. The experiments were conducted at the PHELIX laser facility at the GSI Helmholtzzentrum für Schwerionenforschung GmbH with laser intensities in the range from 10^18 to 10^20 W/cm^2. We observed an average reduction of the half opening angle by (3.07±0.42)°, or (13.2±2)%, for targets with a thickness between 12 and 14 μm. In addition, the highest proton energies were achieved with the hollow laser beam in comparison to the typical Gaussian focal spot.
Heterodyne array receivers are employed in radio astronomy to reduce the observing time needed for mapping extended sources. One of the main factors limiting the number of pixels in terahertz receivers is the difficulty of generating a sufficient amount of local oscillator (LO) power. Another challenge is efficient diplexing and coupling of local-oscillator and signal power to the detectors. These problems are addressed in this dissertation by proposing the application of two vacuum-electronic terahertz amplifier types for the amplification of the LO signal and by introducing a new method for finding defects in a quasioptical diplexer.
A traveling wave tube (TWT) design based on a square helix slow wave structure (SWS) at 825 GHz is introduced. It exhibits a simulated small-signal gain of 18.3 dB and a 3-dB bandwidth of 69 GHz. In order to generate LO power at even higher frequencies, the operation of an 850-GHz square helix TWT as a frequency doubler has been studied. A simulated conversion efficiency of 7% to 1700 GHz, comparable with state-of-the-art solid-state doublers, has been achieved for an input power of 25 mW.
The other amplifier type discussed in this work is a 1-THz cascade backward wave amplifier based on a double corrugated waveguide SWS. Specifically, three input/output coupler types between a rectangular waveguide and the SWS are presented. The structures have been realized with microfabrication, and the results of loss measurements at 1 THz will be shown.
Diplexing of the LO and signal beams is often performed with a Martin-Puplett interferometer. Misalignment and deformation of the quasioptical components cause the polarization state of the output signal to be incorrect, which leads to coupling losses. A ray-tracing program has been developed for studying the influence of such defects. The measurement results of the diplexer of a multi-pixel terahertz receiver operated at the APEX telescope have been analyzed with the program, and the results are presented. The program allows the quasioptical configuration of the diplexer to be corrected in order to obtain a higher receiver sensitivity.
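The dominant effect of such a polarization defect can be illustrated with a toy model, assuming an ideal linear polarizer pair so that the coupled power simply follows Malus's law; the full ray-tracing analysis of the diplexer is of course far more detailed than this sketch.

```python
import math

# Toy model of the coupling loss caused by a polarization misalignment in a
# quasioptical diplexer: for an ideal polarizer/analyzer pair the coupled
# power follows Malus's law. This is only an illustration of the effect, not
# the ray-tracing analysis described in the abstract.

def coupling_efficiency(misalignment_deg):
    """Fraction of power coupled when the polarization is rotated by the given angle."""
    return math.cos(math.radians(misalignment_deg)) ** 2

for angle in (0.0, 5.0, 10.0):
    print(f"{angle:4.1f} deg -> {coupling_efficiency(angle):.4f}")
```

Even a few degrees of rotation cost a measurable fraction of the signal, which is why correcting the quasioptical configuration improves receiver sensitivity.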
The Compressed Baryonic Matter (CBM) experiment will investigate heavy ion collisions and reactions at interaction rates of 100 kHz in a targeted energy range of up to 11 AGeV for systems such as gold-gold or lead-lead. It will be one of the major scientific experiments of the Facility for Antiproton and Ion Research in Europe (FAIR), currently under construction at the site of the GSI Helmholtzzentrum für Schwerionenforschung (GSI) in Darmstadt, Germany. CBM is going to be a fixed-target experiment consisting of a superconducting magnet, multiple detectors of various types, and high-performance computing for online event reconstruction and selection. The detector closest to the interaction point of the experiment will be the Micro Vertex Detector (MVD). Consisting of four planar stations equipped with custom CMOS pixel sensors, it will allow the primary vertex to be reconstructed with high precision and will help to reconstruct secondary vertices and identify particles originating from conversion in the detector material.
Due to the high interaction rates foreseen for CBM, understanding and minimizing systematic errors caused by the detectors' operating conditions will become all the more important for obtaining significant measurement results, as the statistical errors in the measurements of many observables diminish due to the enormous amount of data available.
Furthermore, the MVD will be the first detector based on CMOS pixel sensors used in a large physics experiment that will be operated in vacuum. As a result, many aspects of the mechanical and electrical integration of the detector require careful testing and validation.
This thesis addresses both of those challenges specifically for the Micro Vertex Detector with the development of a control system for the operation and validation of the MVD prototype "PRESTO" in vacuum. The prototype was selected as the device under test since the final MVD has not yet been built.
The developed control system helps a) to operate the prototype safely and keep it at the desired working point and b) to record important time-series data of the state of the detector prototype. Those two aspects allow the control system (which might later serve as a 'blueprint' for the final detector) to minimize the mentioned systematic errors as much as possible and to contribute to the understanding of the remaining systematic errors using correlations with the time-series data. The controlled operation of the prototype in vacuum made it possible to validate the integration concepts from a wide range of mechanical and electrical aspects in an endurance test of more than a year of 24/7 operation.
The prototype itself was named "PRESTO" (standing for 'PREcursor of the Second sTatiOn of the CBM-MVD'). It represents one quadrant of an MVD detector plane, equipped with a total of 15 MIMOSA-26 sensors on the front and back side of a carrier plate. Within this thesis, major parts of the prototype itself were designed. Custom ultra-thin flat flexible cables for data and power were designed and validated. Furthermore, the design of the CNC-machined aluminium heatsink used to mount and cool the prototype was refined to increase its thermal performance. A custom vacuum feedthrough for a total of 21 flat ribbon cables was designed and fabricated. The read-out chain for MIMOSA-26 was extended to cover a total of 8 sensors with a single, newer TRB-3 FPGA board and was set up with the prototype. Vacuum equipment including chambers, hoses, pumps, valves and gauges was integrated to form a large vacuum testing system. A cooling circuit for the prototype was assembled, comprising an external chiller, hoses, vacuum feedthroughs, as well as temperature, flow and pressure sensors.
The control system was developed to serve the needs of the prototype, while taking the requirements of the final MVD already into account. The main design goals of the control system are:
• compatibility with the other detectors and the overall CBM experiment,
• access to real-time measurements of all necessary parameters (‘process values’),
• reliable, fail-safe operation of the detector,
• recording of all time-series data (‘archiving’),
• cost efficiency and acceptance within the physics community,
• good usability for the users (‘operators’),
• long-term maintainability.
The recorded time-series data of the process variables (i.e. sensor readings) allow a post-measurement analysis of variations in the detector performance. The long-term archiving of all relevant system parameters is therefore of outstanding importance, which is why the software intended for this purpose, called "archiver", was given special attention in this thesis.
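The role of such an archiver can be illustrated with a minimal sketch. The process variable names, the placeholder readout, and the CSV backend below are all made up for illustration; the actual system records its data through the control system's archiver, not a script like this.

```python
import csv
import random
import time

# Hypothetical process variables of a detector prototype. In the real system
# these would be read through the control system (e.g. EPICS channels), not
# generated randomly as done here for the sake of a runnable example.
PROCESS_VARIABLES = ["chiller_temp_C", "vacuum_pressure_mbar", "coolant_flow_lpm"]

def read_pv(name):
    """Placeholder for an actual control-system read of one process variable."""
    return round(random.uniform(0.0, 30.0), 3)

def archive_samples(path, n_samples, interval_s=0.0):
    """Append one timestamped row of all process values per sampling cycle."""
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        for _ in range(n_samples):
            row = [time.time()] + [read_pv(pv) for pv in PROCESS_VARIABLES]
            writer.writerow(row)
            time.sleep(interval_s)

archive_samples("archive.csv", n_samples=3)
```

The essential point is that every sample carries a timestamp, so the archived values can later be correlated with variations in detector performance.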
For this reason in particular, it is necessary to implement a comprehensive control system that allows the detector to be operated safely under these conditions and cooled effectively. Before the start of this doctoral thesis, vigilant and extensively trained operators were always necessary for this. The control system that has been developed makes it possible for the detector to be operated, after basic training, by a less specialised shift supervisor during measurement campaigns.
...
The Compressed Baryonic Matter (CBM) experiment is one of the core experiments at the future Facility for Antiproton and Ion Research (FAIR) in Darmstadt, Germany. Its goal is to investigate the characteristics of nuclear matter at high net-baryon densities and moderate temperatures. The Silicon Tracking System (STS) is a central detector system of CBM.
It is placed inside a 1 Tm magnet and operated at a temperature of about −10 °C to keep the radiation-induced bulk current in the 300 μm double-sided microstrip silicon sensors low. The design of the STS aims to minimize the material budget in the detector acceptance (2.5° < θ < 25°). To do so, the readout electronics is placed outside the active area, and the analog signals are transported via ultra-thin micro-cables. The STS comprises eight tracking stations with 876 modules. Each module is assembled on a carbon fiber ladder, which is subsequently mounted in a C-shaped aluminum frame.
The scope of the thesis was the development of a modular control system framework that can be implemented for experimental setups of different sizes. The developed framework was used for setups that required remote operation, such as the irradiation of the powering modules for the front-end electronics (FEE), but also in laboratory-based setups where automation and archiving were needed (thermal cycling of the STS electronics).
The low-voltage powering modules will be placed in the vicinity of the experiment; therefore, they will experience a total dose of up to 40 mGy over the 10-year STS lifetime.
To estimate the effects of radiation on the low-voltage module performance, a dedicated irradiation campaign took place. It aimed at estimating the rate of radiation-induced soft errors that lead to a switch-off of the FEE.
Regular power cycles of multiple front-end boards (FEBs) pose a risk to the experiment operation: such behavior could negatively influence the physics performance and also have deteriorating effects on the hardware. The limitations of the FEBs with respect to thermal cycling and mechanical stress were therefore assessed. The results serve as an indication of possible failure modes of the FEBs at the end of the STS lifetime. Failure modes after repeated cycles and their potential causes were determined (e.g., the difference in the coefficient of thermal expansion (CTE) between the materials).
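The origin of such CTE-driven failures can be illustrated with a back-of-the-envelope calculation of the differential thermal strain accumulated per cycle. The CTE values below are generic textbook numbers, not those of the actual FEB material stack-up.

```python
# Illustrative estimate of the thermal strain mismatch between two bonded
# materials during a temperature cycle. The CTE values are generic examples
# (FR4 laminate vs. silicon), not the actual FEB materials.

CTE_FR4_PER_K = 14e-6       # typical in-plane CTE of FR4 laminate
CTE_SILICON_PER_K = 2.6e-6  # CTE of silicon

def strain_mismatch(cte_a, cte_b, delta_t_k):
    """Differential thermal strain accumulated over a temperature swing."""
    return abs(cte_a - cte_b) * delta_t_k

# Example cycle from a +20 C lab down to a -10 C operating point (delta T = 30 K)
eps = strain_mismatch(CTE_FR4_PER_K, CTE_SILICON_PER_K, 30.0)
print(f"strain mismatch per cycle: {eps:.2e}")
```

Repeated over many power and thermal cycles, this mismatch strain is what fatigues solder joints and bonds, which is why the cycling limits of the FEBs were assessed experimentally.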
Due to the conditions inside the STS, efficient temperature and humidity monitoring and control are required to avoid icing or water condensation on the electronics or silicon sensors. The most important properties of a suitable sensor candidate are resilience to the magnetic field, tolerance to ionizing radiation, and fairly small size.
A general strategy for monitoring the ambient parameters inside the STS was developed, and potential sensor candidates were chosen. The developed control framework was used to characterize the chosen relative humidity sensors. A sampling system with a ceramic sensor and Fiber Optic Sensors (FOS) were identified as reliable solutions for the distributed sensing system. Additionally, industrial capacitive sensors will be used as a reference during commissioning.
Two different designs of FOS were tested: a hygrometer and an array of 5 multiplexed sensors. The FOS hygrometer turned out to be the more reliable solution. Possible reasons for the worse performance of the array are the relatively small distance between subsequent sensors (15 cm) and a thicker coating. The results of the time-response study indicated that a thinner coating of about 15 μm should be a good compromise between humidity sensitivity and time response.
The implementation of the container-based control system framework for the mSTS is described in detail. The deployed EPICS-based framework proved to be a reliable solution and ensured the safety of the detector for almost 1.5 years. Moreover, the data related to the performance of the detector modules were analyzed, and significant progress in the quality of the modules was noted. The obtained data were also used to estimate the total fluence based on the leakage current changes.
The developed framework provided a unique opportunity to automate and control different experimental setups that delivered crucial data for the STS. Furthermore, the work underlines the importance of such a system and outlines the next steps toward the realization of a reliable Detector Control System for the STS.
The upcoming CBM experiment at FAIR aims at exploring the region of highest net baryonic densities reproducible in energetic heavy ion collisions. Due to the very high beam intensities expected at FAIR, unprecedented data regarding rare observables such as charm quarks and hyperons will be accessible. Open charm mesons are particularly interesting, since they support the reconstruction of the total charm cross-section in order to search for exotic phenomena, e.g. a phase transition towards the quark-gluon plasma, which is predicted by several theoretical models. Open charm studies will be performed via secondary vertex reconstruction with a suitable Micro Vertex Detector (MVD). The CBM-MVD is currently in the development and prototyping phase, with primary design goals concentrating on spatial resolution, radiation hardness, material budget, and readout performance. CMOS Monolithic Active Pixel Sensors (MAPS) provide an excellent spatial resolution for the MVD on the order of a few μm in combination with a low material budget (50 μm thickness) and high radiation hardness. The active volume of the devices is formed from the epitaxial layer of standard CMOS wafers. This allows for the integration of pixels together with analogue and digital data processing circuits on one single chip. This option was explored with the MIMOSA-26 prototype, which integrates functionalities such as pedestal correction, correlated double sampling, discrimination, and data sparsification based on zero suppression, combined with a small and dense pixel matrix. The pixel array, composed of 576 rows of 1152 pixels, is read out in a column-parallel rolling-shutter mode. One discriminator per column and the digital data processing circuits are located on the same chip in a 3 mm wide area beneath the pixel matrix, allowing for binary hit encoding. This area also contains the circuits for pedestal correction and the configuration memory, which is programmed via JTAG.
The preprocessed digital data is read out via two 80 Mbit/s LVDS links per sensor, which stream their data continuously based on a low-level protocol.
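The raw bandwidth the readout system must sustain follows directly from these link rates; a small sketch makes the arithmetic explicit, using the 12-sensor scale of the later in-beam test as an example.

```python
# Back-of-the-envelope aggregate bandwidth for MIMOSA-26 readout:
# each sensor streams continuously over two 80 Mbit/s LVDS links.

LINKS_PER_SENSOR = 2
LINK_RATE_MBIT_S = 80

def raw_bandwidth_mbit_s(n_sensors):
    """Total raw LVDS bandwidth for n continuously streaming sensors."""
    return n_sensors * LINKS_PER_SENSOR * LINK_RATE_MBIT_S

print(raw_bandwidth_mbit_s(1))   # per sensor
print(raw_bandwidth_mbit_s(12))  # a 12-sensor setup
```

At 160 Mbit/s per sensor, even a modest prototype setup approaches 2 Gbit/s of raw data, which motivates the FPGA-based deserialization and network transport described below.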
Within the scope of this thesis, a readout concept for the CBM-MVD is proposed and studied based on the current MIMOSA sensor generation. The backbone of the system is formed by Readout Controller boards (ROCs) featuring FPGA microchips and optical links. Several ROC prototypes are considered, using the synergy with the HADES experiment. Finally, the TRB3 board is selected as a possible candidate for the initial FAIR experiments. Furthermore, a highly scalable, hardware-independent FPGA firmware is implemented in order to steer and read out multiple MIMOSA-26 sensors. The reconfigurable firmware is also designed to support future MIMOSA sensor generations. The free-streaming sensor data is deserialized and error-checked prior to its transmission over a suitable network interface. In order to demonstrate the validity of the concept, a readout network similar to the HADES Data Acquisition (DAQ) system is developed. The ROC is tested on the HADES TRB2 boards, and data is acquired using suitable MAPS add-on boards and the TrbNet protocol.
In the context of the CBM-MVD prototype project, a readout network with 12 MIMOSA-26 sensors has been prepared for an in-beam test at the CERN SPS facility. A comprehensive control system is designed, comprising customized software tools. The subsequent in-beam test is used to validate the design choices. As a result, the system could be operated synchronously and dead-time free for several days. The readout network behavior in a realistic operating environment has been carefully studied, with the outcome that the TrbNet-based approach handles the MVD prototype setup without any difficulties. A procedure to keep the sensors synchronous even in case of a data overflow has been pioneered as well. After the beam test, improvements and conceptual changes to the readout system are being addressed which allow an integration into the global CBM DAQ system.
Development of the timing system for the Bunch-to-Bucket transfer between the FAIR accelerators
(2017)
The FAIR project aims at providing high-energy beams of ions of all elements from hydrogen to uranium, as well as antiprotons and rare isotopes, with high intensities. The existing GSI accelerator facility and the future FAIR facility employ a variety of circular accelerators such as heavy-ion synchrotrons (SIS18 and SIS100) and storage rings (ESR, CRYRING, CR and HESR) for the preparation of secondary beams and experiments. Bunches must be transferred into rf buckets between the GSI and FAIR ring accelerators for different purposes. Without a proper transfer, the beam is subject to various forms of beam quality deterioration and even to beam losses. Hence, the proper bunch-to-bucket (B2B) transfer between two rings is of great importance for FAIR and is the topic investigated in this thesis.
The circular accelerators of GSI and FAIR have different circumference ratios. For example, the circumference ratio between SIS100 and SIS18 is an integer, the ratio between SIS18 and ESR is close to an integer, and the ratio between CR and HESR is far from an integer. The ring accelerators are connected via a complicated system of beam transfer lines, targets for secondary particle production, and high-energy separators. For FAIR, not only the primary beams must be transferred from one ring to another, but also the secondary beams, e.g. the antiproton or rare-isotope beams produced at the antiproton (pbar) target, the fragment separator (FRS) or the superconducting fragment separator (Super-FRS). An important topic for this system of accelerators is the proper transfer of beam between the different circular accelerators. Bunches of one ring must be transferred into buckets of another ring within an upper-bound time constraint (e.g. 10 ms for most FAIR use cases) and with an acceptable B2B injection center mismatch (e.g. ±1° for most FAIR use cases). Hence, a flexible FAIR B2B transfer system is required to realize the different complex B2B transfers between the FAIR rings in the future. The focus of the system development and of this thesis is the transfer from SIS18 to SIS100, which can be tested at GSI on the transfers from SIS18 to ESR and from ESR to CRYRING. The system is based on the existing technical basis at GSI, the low-level radio frequency (LLRF) system and the FAIR control system. It coordinates with the Machine Protection System (MPS), which protects SIS100 and subsequent accelerators and experiments from damage caused by high-intensity primary beams in case of malfunction. Besides, it indicates the beam status and the actual beam injection time for the beam instrumentation and diagnostics.
The conceptual realization of the FAIR B2B transfer system is introduced in this thesis for the first time. It achieves most FAIR B2B transfers with a tolerable B2B injection center mismatch (e.g. ±1°) and within an upper-bound time (e.g. 10 ms). It supports two synchronization methods, phase shifting and frequency beating. It is flexible enough to support the beam transfer between two rings with different circumference ratios, as well as several B2B transfers running at the same time, e.g. the B2B transfer from SIS18 to SIS100 and, simultaneously, the B2B transfer from ESR to CRYRING. It is capable of transferring beams of different ion species from one machine cycle to another and of transferring beams between two rings via the FRS, the pbar target and the Super-FRS. It allows various complex bucket filling patterns. In addition, it coordinates with the MPS, which protects SIS100 and subsequent accelerators or experiments from beam-induced damage.
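The time scale of the frequency beating method can be illustrated with a small sketch: detuning one ring's rf by a frequency difference makes the relative bunch/bucket phase slip at that rate, so any target phase is reached within one beat period. The detuning value below is an arbitrary example, not a FAIR machine parameter.

```python
# Illustrative timing of the frequency-beating synchronization method.
# Detuning one ring's rf by delta_f lets the relative phase between bunch
# and bucket slip at delta_f full turns per second; the worst-case wait for
# a given target phase is therefore one beat period. delta_f is a made-up
# example value, not an actual FAIR setting.

def beat_period_s(delta_f_hz):
    """Time for one full 360 deg slip of the relative phase."""
    return 1.0 / delta_f_hz

delta_f = 100.0  # Hz, illustrative detuning
print(f"worst-case wait: {beat_period_s(delta_f) * 1e3:.1f} ms")
```

With a 100 Hz detuning the worst-case wait is 10 ms, i.e. on the same scale as the upper-bound transfer time quoted for most FAIR use cases, which shows why the detuning must be chosen with the time budget in mind.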
A list of criteria for the preservation of beam quality during the rf frequency modulation of the phase shift method was analyzed. As an example, the beam response to three different rf frequency modulations was analyzed for SIS18 beams. According to the beam dynamics analysis, there is a maximum value for the rf frequency modulation. The first derivative of the rf frequency modulation must be continuous and small enough, and the second derivative must be small enough.
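These smoothness criteria can be illustrated with a sketch of one modulation profile that satisfies them: a raised-cosine frequency ramp has a continuous first derivative that vanishes at both ends and a bounded second derivative. The profile and all frequency values below are illustrative choices, not the modulations actually analyzed for SIS18.

```python
import math

# Illustrative raised-cosine rf frequency ramp between two frequencies.
# Its first derivative is continuous, vanishes at both endpoints, and peaks
# at (f1 - f0) * pi / (2 * t_ramp) at the midpoint; its second derivative is
# bounded. The parameter values are made up, not SIS18 settings.

def freq_ramp(t, t_ramp, f0, f1):
    """Frequency at time t for a raised-cosine sweep from f0 to f1."""
    if t <= 0.0:
        return f0
    if t >= t_ramp:
        return f1
    return f0 + (f1 - f0) * 0.5 * (1.0 - math.cos(math.pi * t / t_ramp))

# Numerically locate the maximum slope and compare with the analytic value.
f0, f1, t_ramp = 1.0e6, 1.001e6, 10e-3
dt = 1e-6
max_df = max(
    abs(freq_ramp(t + dt, t_ramp, f0, f1) - freq_ramp(t, t_ramp, f0, f1)) / dt
    for t in (i * dt for i in range(int(t_ramp / dt)))
)
print(f"max df/dt ~ {max_df:.3e} Hz/s")
```

Checking the maximum slope against its allowed bound is exactly the kind of criterion the analysis imposes: a profile with a discontinuous or too-steep derivative would violate the beam quality requirements.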
In addition to the analysis from the viewpoint of beam dynamics, two test setups were built. The first test setup was used to characterize the FAIR timing network, a White Rabbit network, for the B2B transfer. In the second test setup, the firmware of the FAIR B2B transfer system was evaluated, running on the soft CPU (LatticeMico32) of the Scalable Control Unit, the FAIR standard front-end controller. Besides, the boundary conditions of the different trigger scenarios of the SIS18 extraction and SIS100 injection kicker magnets were investigated. Finally, the application of the FAIR B2B transfer system to all FAIR use cases was demonstrated.
The dissertation plays a significant role in the realization of the FAIR B2B transfer system and in the further practical application of the system to all FAIR use cases.
The slow neutron capture process (s-process) is responsible for the production of about half of the elements between iron and lead. Along the valley of stability, its reaction path contains several branching points at unstable isotopes whose neutron capture cross sections influence the production of heavier elements and their isotopic ratios. If their decay and neutron capture rates under the assumed stellar conditions are known, it is possible to draw conclusions about the physical conditions during the s-process. One of these branching points is 63-Ni. The experimental determination of the differential neutron capture cross section of this isotope is the primary result of the present work. The 63-Ni(n,gamma) cross section influences the abundances of 64-Ni and of the copper and zinc isotopes. The sensitivity of the production of these nuclides in s-process scenarios was also investigated in this work by means of simulations of the corresponding nucleosynthesis network. In addition, the data basis for s-process models was extended with a time-of-flight measurement of the 63-Cu(n,gamma) cross section.
The two experiments for the cross-section determination of 63-Ni and 63-Cu took place at the Los Alamos Neutron Science Center in New Mexico, USA. A 63-Ni sample produced from enriched 62-Ni was irradiated with pulsed neutrons in a time-of-flight measurement. The prompt gamma radiation from neutron captures was detected with the 4π BaF_2 detector DANCE. The calorimetric measurement makes the Q-value of the reaction accessible for each capture event and allows events from different isotopes to be distinguished. It could be shown that this method enables the determination of cross sections even with samples of which only a fraction consists of the isotope under investigation. The 63-Ni(n,gamma) cross section was determined for the energy range from 40 eV to 500 keV with a maximum uncertainty of 15%. It turned out that theoretical estimates had so far underestimated the cross section by about a factor of 2. In the same energy range, the 63-Cu(n,gamma) cross section was measured with a maximum uncertainty of 8%.
The beam dynamics design for the MYRRHA injector was newly developed with regard to high reliability and availability as well as an improved beam output emittance, and now fulfills the requirements of the nuclear reactor.
In the statistical error analysis, the beam dynamics of the CH section proves to be extremely robust and delivers a transmission of more than 99.9% even under pessimistic error assumptions.
The new injector concept offers substantial advantages over the injector design presented in the "MAX Referenzdesign 2012" and is used as the new "MAX Referenzdesign 2014" for the MYRRHA injector. The good beam dynamics properties of the new injector design were confirmed in comparative calculations with TraceWin at IN2P3@CNRS (Institut National de Physique Nucléaire et de Physique des Particules @ Centre National de la Recherche Scientifique, Orsay, France).
In addition to the beam dynamics, the RF design for the required accelerator cavities was developed and likewise optimized for high reliability and availability. For the greatest possible operational reliability, the RF design of the CH structures is laid out for operation at low electric field gradients, far below the technical power limits and capabilities of the respective cavity.