The main focus of research in the field of high-energy heavy-ion physics is the study of the quark-gluon plasma (QGP). The topic of the present work is the measurement of electron-positron pairs (dielectrons), which grant direct access to some of the key properties of this state of matter, since after their formation they leave the hot and dense medium without significant interaction. In particular, the measurement of the initial QGP temperature is considered a "holy grail" of heavy-ion physics. Therefore, in addition to the analysis of existing data, a feasibility study has been conducted to determine to what extent this goal would be achievable by upgrading the ALICE experiment at CERN.
Dielectrons are produced during all stages of a heavy-ion collision, with their invariant mass reflecting the amount of energy available at the time of their formation. Dielectrons of highest mass are thus produced in the initial scatterings of the colliding nuclei by quark-antiquark annihilation. Correlated electron-positron pairs can also emerge from the decay chains of early-produced pairs of heavy-flavour (HF) particles. During the QGP stage and at the beginning of the hadronic phase, the system emits thermal radiation in the form of photons and dielectrons, which carry information about the medium temperature to the observer. In the final stage of the collision, decays of light-flavour (LF) hadrons produce additional contributions to the dielectron spectrum.
The present work is based on early data from the ALICE experiment, recorded in lead-lead collisions at a center-of-mass energy of 2.76 TeV per nucleon pair. Due to the limited amount of data, a focus is placed on achieving high efficiencies throughout the analysis. To this end, a special electron identification strategy is developed and a custom track selection is applied, together resulting in a tenfold increase in pair efficiency. The dielectron spectrum is evaluated on a statistical basis using a pair prefilter, optimized with respect to two signal quality criteria, to reduce the fraction of electrons and positrons from unwanted sources with minimal signal loss. In addition, an artifact of the track reconstruction is exploited to suppress pairs from photon conversions and to correct the dielectron yield for a contribution from pairs formed out of two different conversions. The main signal uncertainty is extracted from the deviation between the results of 20 analysis settings and amounts to 20% in most of the studied kinematic range.
For comparison with the analysis results, a hadronic cocktail consisting of the LF and HF contributions is simulated; it describes the measured dielectron production reasonably well, with a hint of an enhancement at low invariant mass. Two approaches to modeling the in-medium modification of the heavy-flavour contribution are followed, resulting in up to 50% suppression, which creates some additional room for a thermal contribution at intermediate mass.
For a complete comparison between experimental data and theoretical expectation, two model calculations are consulted. The Thermal Fireball Model provides predictions for thermal dielectron radiation from the QGP and hadron gas. The data tends to be better described with these additional thermal contributions. For a comparison with a prediction by the UrQMD model, the HF component of the cocktail is subtracted from the data. This results in better agreement if the HF suppression by in-medium effects is taken into account.
The feasibility study in this work has served as a physics motivation for the ALICE upgrade for LHC Run 3. The precision with which the early temperature of the QGP can be determined via dielectrons is chosen as the key observable. A multitude of individual contributions is merged into a fully modeled dielectron analysis. The resulting signal-to-background ratio reflects some of the expected systematic uncertainties, while from the significance, combined with the planned number of lead-lead collisions, a realistic "measurement" with statistical fluctuations around the expected dielectron signal is generated using a Poisson sampling technique. Since the HF yield exceeds the QGP thermal radiation by about an order of magnitude, an additional analysis step exploiting the enhanced track reconstruction is introduced to reduce this contribution by up to a factor of five. The resulting reduction in pair efficiency is overcompensated by an up to a hundred times higher collision rate. The entire cocktail is then subtracted from the sampled data to isolate the thermal excess yield. The final analysis of this spectrum shows that the inverse slope of the model prediction, which depends directly on the QGP temperature, can be reproduced within statistical and systematic uncertainties of about 10%.
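The Poisson sampling step described above can be sketched as follows. This is a minimal illustration, not the study's actual implementation: the spectrum shape, binning, and normalization are placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Hypothetical expected dielectron yield per invariant-mass bin (counts),
# standing in for the fully modeled signal-plus-background spectrum.
mass_bins = np.linspace(0.0, 3.0, 31)           # GeV/c^2 bin edges (toy values)
expected = 1e4 * np.exp(-mass_bins[:-1] / 0.3)  # toy falling spectrum

# Poisson sampling: each bin of the emulated "measurement" fluctuates
# statistically around its expectation.
measured = rng.poisson(expected)

# Subtracting the known expectation isolates the fluctuating residual,
# analogous to subtracting the hadronic cocktail from the sampled data.
residual = measured - expected
```

With a fixed seed, the procedure is reproducible, and the bin-by-bin fluctuations scale as the square root of the expected counts, which is what limits the precision of the extracted inverse slope.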
The promising results of this study have contributed to the realization of the ALICE upgrade and to a design decision for the new Inner Tracking System, and at the same time represent exciting predictions for upcoming measurements.
This dissertation presents the beam dynamics designs of two radio frequency quadrupole (RFQ) linear accelerators: the RFQ of the proton linear accelerator (p-Linac) of the FAIR project at GSI Darmstadt, as well as a first design draft for a compact RFQ that could be used, among other purposes, to produce radioisotopes for medical applications. The main focus is on the first design.
For finite baryon chemical potential, conventional lattice descriptions of quantum chromodynamics (QCD) have a sign problem which prevents straightforward simulations based on importance sampling.
In this thesis we investigate heavy dense QCD by representing lattice QCD with Wilson fermions at finite temperature and density in terms of Polyakov loops.
We discuss the derivation of $3$-dimensional effective Polyakov loop theories from lattice QCD based on a combined strong coupling and hopping parameter expansion, which is valid for heavy quarks.
The finite density sign problem is milder in these theories and they are also amenable to analytic evaluations.
The analytic evaluation of Polyakov loop theories via series expansion techniques is illustrated by using them to evaluate the $SU(3)$ spin model.
We compute the free energy density to $14$th order in the nearest-neighbor coupling and find that predictions for the equation of state agree with simulations to $\mathcal{O}(1\%)$ in the phase where the (approximate) $Z(3)$ center symmetry is intact.
The critical end point is also determined but with less accuracy and our results agree with numerical results to $\mathcal{O}(10\%)$.
While the accuracy for the endpoint is limited for the current length of the series, analytic tools provide valuable insight and are more flexible.
Furthermore, they can be generalized to Polyakov loop theories with $n$-point interactions.
We also take a detailed look at the hopping expansion for the derivation of the effective theory.
The exponentiation of the action is discussed using a polymer expansion, and we also explain how to obtain logarithmic resummations for all contributions, which is achieved by employing the finite cluster method known from condensed matter physics.
The finite cluster method can also be used to evaluate the effective theory, and we compare the evaluation of the effective action with a direct evaluation of the partition function.
We observe that terms in the evaluation of the effective theory correspond to partial contractions in the application of Wick's theorem for the evaluation of Grassmann-valued integrals.
Potential problems arising from this fact are explored.
Next-to-next-to-leading-order results from the hopping expansion are used to analyze and compare the onset transition for both baryon and isospin chemical potential.
Lattice QCD with an isospin chemical potential does not have a sign problem and can serve as a valuable cross-check.
Since we are restricted by the relatively short length of our series, we content ourselves with observing some qualitative phenomenological properties arising in the effective theory which are relevant for the onset transition.
Finally, we generalize our results to an arbitrary number of colors $N_c$.
We investigate the transition from a hadron gas to baryon condensation and find that for any finite lattice spacing the transition becomes stronger as $N_c$ is increased, becoming first order in the limit of infinite $N_c$.
Beyond the onset, the pressure is shown to scale as $p \sim N_c$ through all available orders in the hopping expansion, which is characteristic for a phase termed quarkyonic matter in the literature.
Some care has to be taken when approaching the continuum, as we find that the continuum limit has to be taken before the large $N_c$ limit.
Although we are currently unable to take the limits fully in this order, our results are stable over the controlled range of lattice spacings as the limits are approached in this order.
Neurons are cells with a highly complex morphology; their dendritic arbor spans up to thousands of micrometers. This extended arbor poses a challenge for the logistics of neuronal processes: mRNA, proteins, and organelles have to be transported to dendrites, hundreds of micrometers away from the soma. This thesis aims to calculate the minimum number of proteins needed to populate the dendritic trees for different scenarios.
In chapter 2, I analyzed the ability of different mechanisms to populate the dendritic arbor. I started from the solution of the diffusion equation in Sec. 2.1; in Sec. 2.2 I included the contribution of active transport and showed how it can either increase the effective diffusion coefficient or introduce a bias in the diffusion process. In Sec. 2.3 I studied the spatial distribution of locally synthesized proteins, according to whether the mRNA is actively or passively transported. In Sec. 2.5, I derived the boundary condition at branch points, which reveals a qualitatively different behavior of surface and cytoplasmic proteins induced by the dimensionality of the medium in which they diffuse.
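The purely diffusive starting point can be sketched as follows. This is a minimal sketch assuming somatic production at x = 0 and uniform degradation; the parameter values are illustrative and not taken from the thesis.

```python
import numpy as np

# Steady-state concentration along an unbranched dendrite for a purely
# diffusive protein produced at the soma (x = 0) and degraded at rate 1/tau:
#     D c''(x) - c(x)/tau = 0   =>   c(x) = c0 * exp(-x / lam),
# with diffusion length lam = sqrt(D * tau).
D = 0.5                  # diffusion coefficient, um^2/s (illustrative)
tau = 2.4e4              # protein lifetime, s (illustrative)
lam = np.sqrt(D * tau)   # diffusion length, ~110 um here

x = np.linspace(0, 500, 501)   # distance from soma, um (1 um spacing)
c = np.exp(-x / lam)           # profile normalized to c(0) = 1

# The profile drops by a factor 1/e over one diffusion length:
assert abs(c[int(round(lam))] - np.exp(-1)) < 1e-2
```

The exponential decay with length scale lam is what makes supplying distal dendrites so costly for purely diffusive transport, motivating the active-transport and local-synthesis scenarios analyzed next.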
In chapter 3, I introduced the concept of protein requirement, defined as the minimum number of proteins that the neuron needs to produce in order to provide at least one protein to each micrometer of the dendritic arbor. In Sec. 3.1, I derived the protein requirement for diffusive proteins, for somatic translation and for constant translation in the dendritic arbor. In Sec. 3.2, I analyzed numerically the protein requirement in the case of actively transported proteins synthesized in the soma and, in Sec. 3.3, in the case of actively transported proteins synthesized in the dendritic arbor. In Sec. 3.4, I analyzed the protein requirement of proteins synthesized in the dendrite according to the mRNA distributions described in Secs. 3.2 and 3.3. In Sec. 3.5, I derived the protein requirement for a single branch and purely diffusive proteins.
In chapter 4, I analyzed the relation between the radii of the three dendrites meeting at a branch point, their lengths, and the diffusion length of a protein. In Sec. 4.1 I derived the optimal ratio between the radii of the daughter dendrites that minimizes the protein requirement. In Sec. 4.3 I introduced the 3/2 Rall rule and in Sec. 4.5 its generalization. Finally, I used those rules to estimate the fraction of proteins diffusing away from and toward the soma.
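The branching rule can be sketched as follows. The exponent 3/2 is the classical Rall value; the function names and numbers are illustrative, not from the thesis.

```python
# The 3/2 Rall rule relates the parent radius r0 to the daughter radii r1, r2
# at a branch point:  r0**(3/2) == r1**(3/2) + r2**(3/2).

def rall_residual(r0, r1, r2, a=1.5):
    """Deviation from the generalized Rall rule r0^a = r1^a + r2^a."""
    return r0**a - (r1**a + r2**a)

def symmetric_daughter_radius(r0, a=1.5):
    """Daughter radius for a symmetric split (r1 == r2) obeying the rule."""
    return r0 * 0.5**(1.0 / a)

r0 = 1.0                                # parent radius (arbitrary units)
r1 = symmetric_daughter_radius(r0)      # ~0.63 for a = 3/2
assert abs(rall_residual(r0, r1, r1)) < 1e-12
```

Replacing the exponent `a` gives the generalized rule; fitting `a` per branch point is one way to quantify how far measured morphologies deviate from the optimum.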
In chapter 5, I analyzed the radii distribution for three categories of neurons: cultured hippocampal neurons in Sec. 5.1, stomatogastric ganglion neurons in Sec. 5.2, and 3DEM-reconstructed prefrontal pyramidal neurons in Sec. 5.3. For each of these three classes, I analyzed the distribution of radii, Rall exponents, and the probability ratio. For most of them, I found that the probability of a protein diffusing away from the soma is higher for surface proteins than for cytoplasmic ones. I quantified this with a parameter called surface bias.
In Chapter 6, I analyzed the fluorescence ratio imaged by our collaborator Anne-Sophie Hafner for a surface protein, GFP::Nlg, and a soluble one, GFP, in cultured hippocampal neurons, and compared it with the probability ratio obtained in Sec. 5.1, finding that they are in good agreement.
In chapter 7, I compared the real dendritic morphologies imaged by one of our collaborators, Ali Karimi, with the optimal branching rule obtained in Sec. 4.1, and I calculated the cost of not having optimal branching radii.
Finally, in Chapter 8, I used the branching statistics gathered in Sec. 5.3 to simulate the protein profile in three different classes of neurons: pyramidal neurons, granule neurons, and Purkinje neurons. I compared the protein profiles of surface and cytoplasmic proteins for each morphology, for two different values of the diffusion length, λ = 109 µm and λ = 473 µm, both for optimized radii and for symmetrical radii. I showed how the radii optimization reduces the protein requirement by a factor of 10^4 for pyramidal neurons.
Particle collisions provide insight into the structure of matter and the interactions of its constituents. Furthermore, they also allow a better understanding of the processes involved in the formation of the universe. To cover these diverse areas, it is necessary to study different observables and collision systems. A particular challenge is to find a suitable measurable observable for a theoretically meaningful variable and to develop a measurement procedure that takes the experiment into account. The analyses of particle collisions in this thesis cover many of the challenges and objectives mentioned above. The focus of the work is the analysis of isolated photons at an energy of √s = 7 TeV. In addition, the work also includes measurements of the average transverse momentum in Pb-Pb collisions at an energy of √sNN = 2.76 TeV.
Apart from the collision system, the two analyses complement each other in other respects. The measurement of isolated photons represents the first measurement of this observable with ALICE and thus lays the foundation for further measurements at other collision systems and energies. The measurement of the mean transverse momentum, on the other hand, is based on an established measurement and thus allows the comparison of different collision systems. Likewise, the physical processes studied differ. With the measurement of isolated photons, hard scattering processes in the collisions can be investigated, while the average transverse momentum allows a description of the underlying event.
When measuring isolated photons, it should be noted that isolated photons are a measurable observable that cannot be assigned to an explicit physical process. The isolation criterion used in the analysis serves to increase the fraction of prompt photons from 2→2 processes. These photons can contribute to a better understanding of the parton density function (PDF) of gluons, as well as be used as a reference for perturbative QCD calculations.
Of particular importance for the analysis are the cluster shape and the energy within a certain radius around the photon candidate. The combination of these two quantities allows the background to be determined using the ABCD method established by CDF and ATLAS. The result obtained in this way extends the previous measurements of the cross-section of isolated photons at the LHC to lower transverse momenta. Similarly, the previous measurements of the cross-section as a function of the scaling variable xT are extended to lower values.
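The ABCD estimate can be sketched as follows. Events are split into four regions by two (approximately) uncorrelated discriminating variables, here the cluster shape and the isolation energy; all counts below are purely illustrative.

```python
# ABCD method: region A is the signal region (signal-like in both variables),
# regions B, C, D are control regions. If the two variables are uncorrelated
# for background, the background in A is estimated as
#     N_bkg(A) = N_B * N_C / N_D.

def abcd_background(n_b, n_c, n_d):
    """Estimated background count in the signal region A."""
    return n_b * n_c / n_d

# Illustrative counts in the four regions:
n_a, n_b, n_c, n_d = 1200, 400, 900, 600

n_bkg = abcd_background(n_b, n_c, n_d)   # 600.0
n_sig = n_a - n_bkg                      # 600.0 isolated-photon candidates
```

The method is fully data-driven; its main systematic uncertainty comes from residual correlation between the two discriminating variables.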
The main focus of the measurement of the average transverse momentum of charged particles ⟨pT⟩ is to compare the results for the pp, p-Pb, and Pb-Pb collision systems. To obtain a direct comparison between the different collision systems, ⟨pT⟩ is measured as a function of the true multiplicity nch. Since the multiplicity range of pp and p-Pb collisions is limited, the analysis in Pb-Pb collisions is restricted to nch ≤ 100. This range corresponds to peripheral Pb-Pb collisions. A particular focus of the analysis is the determination and reduction of the electromagnetic background in peripheral Pb-Pb collisions and the determination of nch based on the measured multiplicity nacc. The different collision systems show similar behavior with increasing multiplicity. The steepest increase occurs at low multiplicities, and the slope changes for all collision systems at nch = 14. At higher multiplicities, the slope decreases further, with the effect being most pronounced in Pb-Pb collisions.
This dissertation describes the development of the beam dynamics design of a novel superconducting linear accelerator. At a main operating frequency of 216.816 MHz, ions with a mass-to-charge ratio of up to 6 can be accelerated at high duty cycles, up to CW operation. The accelerator is intended for construction at the GSI Helmholtz Centre for Heavy Ion Research in Darmstadt; the focus of this work is on the beam dynamics design of the accelerator section downstream of the High Charge Injector (HLI), at an injection energy of 1.39 MeV/u. An essential feature of this linear accelerator (linac) is the use of the EQUUS (Equidistant Multigap Structure) beam dynamics concept for a variably adjustable output energy between 3.5 and 7.3 MeV/u (corresponding to about 12.4% of the speed of light) with a required low energy spread of at most 3 keV/u.
The GSI Helmholtz Centre for Heavy Ion Research is a large-scale research facility that uses its particle accelerators to perform basic research with ion beams. Research on super-heavy elements ("SHE") is a major focus. It is expected that their production and research will provide answers to a large number of scientific questions. The production and detection of elements with atomic numbers 107 to 112 (Bohrium, Hassium, Meitnerium, Darmstadtium, Röntgenium and Copernicium) was first achieved at GSI between 1981 and 1996.
Key to this remarkable progress in SHE research were continuous developments and technical innovations: on the one hand in the experimental sensitivity and detection of the nuclear reaction products, and on the other hand in accelerator technology.
For the acceleration of the projectile beam, the UNILAC (Universal Linear Accelerator), which was put into operation in 1975, has been used at GSI so far. In the course of the reconstruction and expansion of the research infrastructure at GSI, a dedicated new particle accelerator, HELIAC (Helmholtz Linear Accelerator), is now under development to meet the special requirements of the beam parameters for the synthesis of new superheavy elements. Typically, the production rates of super-heavy elements with effective cross sections in the picobarn range are very low. Therefore, a high duty cycle (up to CW operation) is a key feature of HELIAC. Thus, the required beam time for the desired nuclear reactions can be significantly shortened.
Theoretical preliminary work by Minaev et al. and newly created knowledge about design, fabrication, and operation of superconducting drift tube cavities have laid the foundation for this work and thus the development of the HELIAC linear accelerator. It consists of a superconducting and a normal conducting part. Acceleration takes place in the superconducting part in four cryomodules, each about 5 m long. These contain three CH cavities, one buncher cavity, two solenoid magnets for transverse beam focusing, and two beam position monitors (BPMs).
The following 10 m long normal conducting part is primarily used for beam transport and ends with a buncher cavity. This is operated at a halved frequency of 108.408 MHz.
A key feature of this accelerator is the variability of the output energy from 3.5 to 7.3 MeV/u, with a small energy uncertainty of at most ±3 keV/u over the entire output energy range. For the development of HELIAC, the EQUUS beam dynamics concept combines the advantages of conventional linac designs with the high acceleration gradients of superconducting CH-DTLs. By doubling the frequency (compared to the GSI High Charge Injector) to 216.816 MHz in the superconducting section and using CH cavities at an acceleration gradient of at most 7.1 MV/m, an acceleration efficiency with superconducting drift tube structures that is unique in the world is made possible. At the same time, the compact lengths of the CH cavities ensure good handling in both production and operation. EQUUS provides longitudinal beam stability in all energy ranges of the accelerator through the sliding motion of the synchronous phase within each CH cavity. The rms emittance growth is moderate in all planes. The modular design of HELIAC with four cryomodules in principle allows the linac to be commissioned starting with the first cryomodule, the so-called Advanced Demonstrator. Already in the subsequent expansion stage with only the first two cryomodules of HELIAC, the lower limit of the energy range to be provided by HELIAC (3.5 MeV/u) can be clearly exceeded, so that use in regular beam operation at GSI is conceivable from this stage on.
By means of error tolerance studies, the stability of the HELIAC beam dynamics design against possible alignment errors of the magnetic focusing elements and accelerator cavities, as well as against errors of the electric field amplitudes and phases, has been investigated and essentially confirmed, and critical parameters have been determined. An additional steering concept via dipole correction coils at the solenoid magnets allows transverse beam control as well as diagnostics by means of two BPMs per cryomodule.
With completion of this work in 2021, the CH1 and CH2 cavities have already been built and are in the final preparation and cold test phase. In parallel, the development of the CH cavities CH3-11 has also been started.
The topic of this thesis is the theoretical description of the hadron gas stages of heavy-ion collisions. The overall question addressed is: how does the hadronic medium evolve, i.e., what are the relevant microscopic reaction mechanisms and the properties of the involved degrees of freedom? The main goal is to address this question specifically for hadronic multi-particle interactions. To this end, the hadronic transport approach SMASH is extended with stochastic rates, which allow detailed-balance-fulfilling multi-particle reactions to be included in the approach. Three types of reactions are newly accounted for: 3-to-1, 3-to-2, and 5-to-2 reactions. After extensive verification of the stochastic rates approach, it is used to study the effect of multi-particle interactions, particularly in afterburner calculations.
These studies follow complementary results for the dilepton and strangeness production with only binary reactions, which show that hadronic transport approaches are capable of describing observables when employed for the entire evolution of low-energy heavy-ion collisions. This is illustrated by the agreement of dilepton and strangeness production for smaller systems with SMASH calculations. It is, in particular, possible to match the measured strangeness production of phi and Xi hadrons via additional heavy nucleon resonance decay channels. For larger systems or higher energies, hadronic transport cascade calculations with vacuum resonance properties can point to medium effects. This is demonstrated extensively for the dilepton emission in comparisons to the full set of HADES dielectron data. The dilepton invariant mass spectra are sensitive to a medium modification of the vector meson spectral function for large collision systems already at low beam energies. The sensitivity to medium modifications is mapped out in detail by comparisons to a coarse-graining approach, which employs medium-modified spectral functions and is based on the same evolution.
The theoretical foundation of the stochastic rates approach is formed by collision probabilities derived from the collision term of the Boltzmann equation under the assumption of a constant matrix element. This derivation is presented in a comprehensive and pedagogical fashion. The derived collision probabilities are employed for a stochastic collision criterion and for various detailed-balance-fulfilling multi-particle reactions: the mesonic Dalitz decay back-reaction (3-to-1), the deuteron catalysis (3-to-2), and the proton-antiproton annihilation back-reaction (5-to-2). The introduced stochastic rates approach is extensively verified by studies of the numerical stability and by comparisons to previous results and analytic expectations. The stochastic rates results agree perfectly with the respective analytic results.
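The stochastic collision criterion can be illustrated with its binary special case. This is a minimal sketch, not the SMASH implementation; all parameter values are illustrative.

```python
import random

# Stochastic collision criterion for one particle pair in a spatial cell of
# volume dV during a time step dt: with total cross section sigma and relative
# velocity v_rel, the pair is sampled to collide with probability
#     P22 = sigma * v_rel * dt / dV.
# Multi-particle reactions generalize this to probabilities for n-particle
# configurations in the same cell.

def pair_collides(sigma, v_rel, dt, dV, rng=random.random):
    p22 = sigma * v_rel * dt / dV
    # dt and dV must be chosen such that p22 stays a valid probability:
    assert p22 <= 1.0
    return rng() < p22

random.seed(0)
sigma, v_rel, dt, dV = 3.0, 0.5, 0.1, 10.0   # fm^2, c, fm/c, fm^3 (illustrative)
outcome = pair_collides(sigma, v_rel, dt, dV)
```

Because the decision is a per-timestep Bernoulli trial rather than a geometric closest-approach condition, the same recipe extends naturally to 3-to-2 or 5-to-2 reactions, which is what makes the stochastic criterion suitable for detailed-balance-fulfilling multi-particle reactions.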
Physically, multi-particle reactions are demonstrated to be significant for different observables, most notably the yield of the partaking particles, even in the late dilute stage of heavy-ion reactions. They lead to a faster equilibration of the system than equivalent binary multi-step treatments. The difference in equilibration consequently influences the yield in afterburner calculations. Interestingly, the interpretation of results is not dependent on employing multi-particle or multi-step treatments, which a posteriori validates the latter.
As the first test case of multi-particle reactions in heavy-ion reactions, the mesonic 3-to-1 Dalitz decay is found to be dominated by the omega Dalitz decay back-reaction. While the effect on the medium is found to be negligible overall, the regeneration is found to be sizable: up to a quarter of Dalitz decays are regenerated.
Non-equilibrium rescattering effects are shown to be relevant in the late collision stages for two particle species: deuterons and protons. In both cases, the relevant rescatterings involve multiple particles.
The deuteron pion and nucleon catalysis reactions equilibrate quickly in the afterburner stage at intermediate energies. The constant formation and destruction keeps the yield constant and microscopically explains the "snowballs in hell" paradox. The yield is also generated when no deuterons are present at early times, which explains why coalescence models can also match the multiplicity.
The study of the 5-body back-reaction of proton-antiproton annihilations is new. This work marks the first realization of microscopic 5-body reactions in a transport approach fulfilling detailed balance for such reactions. A sizable regeneration due to the back-reaction, restoring up to half of the proton-antiproton pairs lost to annihilations, is found. Consequently, both annihilation and regeneration in the late non-equilibrium stage are shown to have a significant effect on the proton yield.
This thesis deals with the phenomenology of QCD matter, covering its aspects in heavy-ion collisions and in neutron stars. The first half of the work focuses on the hadronic phase of QCD matter: how the hadronic phase manifests itself in heavy-ion collisions and how its dynamics can be simulated. The role of hadronic interactions is considered in the context of lattice QCD data. The second part of this thesis presents a unified approach to QCD matter, the CMF model. The CMF model incorporates many aspects of QCD phenomenology, which allows for a consistent description of the hadron-quark transition, making it applicable to the entire QCD phase diagram, i.e., to cold nuclear matter as well as to hot QCD matter. It is shown that a description of both the hot matter created in heavy-ion collisions and the cold dense matter in neutron star interiors is possible within one single approach, the CMF model.
Next-generation DIRC detectors, like the PANDA Barrel DIRC, with improved optical designs and better spatial and timing resolution, require correspondingly advanced reconstruction and PID methods. The investigation of the PID performance of two DIRC counters and the evaluation of the reconstruction and PID algorithms form the core of this thesis. Several reconstruction and PID approaches were developed, optimized, and tested using hadronic beam particles, experimental physics events, and Geant simulations. The near-final design of the PANDA Barrel DIRC was evaluated with a prototype in the T9 beamline at CERN in 2018. The analysis finds excellent agreement between the experimental data and the Geant simulations for all reconstruction algorithms. The best PID performance, up to $5.2 \pm 0.2$ s.d. $\pi$/K separation at 3.5 GeV/c, was obtained with a time-imaging PID method. The PANDA Barrel DIRC simulation, as well as the reconstruction and PID algorithms, were evaluated using experimental data from the GlueX DIRC as part of the FAIR Phase-0 program. The performance validation was carried out using physics events of the GlueX experiment and simulations. The initial analysis results of the commissioning dataset show a $\pi$/K separation power of up to 3 s.d. at a momentum of 3.0-3.5 GeV/c, obtained using a geometric reconstruction algorithm.
Terahertz (THz) technology is an emerging field concerned with the radiation between the microwave and far-infrared regions, where electronic and photonic technologies merge. THz generation and THz sensing technologies aim to fill the gap between photonics and electronics, defined as a region where THz generation power and THz sensing capabilities are at a low technology readiness level (TRL). As one option for THz detection, field-effect transistors with integrated antennas were suggested as THz detectors in the 1990s by M. Dyakonov and M. Shur, marking the beginning of the development of field-effect-transistor-based detectors. In this work, various FET technologies are presented, such as CMOS, AlGaN/GaN, and graphene-based material systems, together with further sensitivity enhancements aimed at reaching the performance of the well-developed Schottky-diode-based THz sensing technology. The FET-based detectors presented here were explored over a wide frequency range, from 0.1 THz up to 5 THz, in narrowband and broadband configurations.
For the proper implementation of THz detectors, well-defined characterization is of high importance. Therefore, this work reviews the characterization methods, establishes various definitions of detector parameters, and summarizes the state-of-the-art THz detectors. The electrical, optical, and cryogenic characterization techniques are also presented, together with the best results obtained through the development of these methods, namely graphene FET stabilization, low-power THz source characterization for detector calibration, and technology development for cryogenic detection.
Following the discussion of detector characterization, a wide range of THz applications, tested during four years of Ph.D. research conducted under the ITN CELTA project of the HORIZON 2020 program, is presented in this work. The studies began with spectroscopy and imaging applications and later developed towards hyperspectral imaging and even passive imaging of human-body THz radiation. Covering various options for THz applications, single-pixel detectors as well as multi-pixel arrays are discussed in this work.
The conducted research shows that FET-based detectors can be used for spectroscopy applications or be easily adapted to the relevant frequency range. The state-of-the-art detectors considered in this work reach a resonant performance below 20 pW/√Hz at 0.3 THz and 0.5 THz, as well as a cross-sectional NEP of 404 pW/√Hz at 4.75 THz. The broadband detectors show an NEP as low as 25 pW/√Hz around 0.6 THz for the best AlGaN/GaN design and 25 pW/√Hz around 1 THz for the best CMOS design. As one of the most promising applications, metamaterial characterization was tested using the most sensitive devices. Furthermore, one of the single-pixel devices and a multi-pixel array were tested as an engineering solution for the GREAT radio astronomy instrument aboard the SOFIA stratospheric observatory. The exploration of the autocorrelation technique using FET-based devices shows the possibility of employing such detectors for direct detection of THz pulses without an interferometric measurement setup.
This work also considers imaging applications, including near-field and far-field visualization solutions. A considerable milestone for the theory of FET detectors was achieved when scanning near-field microscopy allowed the visualization of plasma (carrier-density) waves in a graphene FET channel. Another important milestone for THz technology was reached when a 3D scan of a mobile phone was performed in far-field imaging mode. Even though the imaging was done through the phone's plastic cover, the image showed high accuracy and good feature recognition of the smartphone, bringing FET-based detector technology close to practical security applications. In parallel, multi-pixel testing was carried out on 6x7 pixel arrays implemented in configurable-size-aperture and imaging configurations. The configurable aperture size allowed an easier detector-focusing procedure and a better fit to the beam size of the incident radiation. The imaging was tested with various THz sources and compared to a TeraSense 16x16 pixel array. The experimental results show a clear advantage of the developed multi-pixel array over the commercial technology used for comparison.
Furthermore, two ultra-low-power applications have been successfully tested. Hyperspectral THz imaging, tested with a specially developed dual frequency comb and our detector system for 300 GHz radiation with 9 spectral lines, led to outstanding imaging results on various materials. Passive imaging of human-body radiation was conducted using the most sensitive broadband CMOS detector with a log-spiral antenna operating in the 0.1 – 1.5 THz range and reaching an optical NEP of 42 pW/√Hz. The NETD of this device reaches 2.1 K, which is below the requirement for passive room-temperature imaging of human-body radiation, whose thermal contrast is less than 10 K above the room-temperature background. This experiment opened a completely new field that had previously been explored only with multiplier-chain-based or thermal detectors.
...
The Compressed Baryonic Matter (CBM) Experiment will investigate heavy-ion collisions and reactions at interaction rates of 100 kHz in a targeted energy range of up to 11 AGeV for systems such as gold-gold or lead-lead. It will be one of the major scientific experiments of the Facility for Antiproton and Ion Research in Europe (FAIR), currently under construction at the site of the GSI Helmholtzzentrum für Schwerionenforschung (GSI) in Darmstadt, Germany. CBM is going to be a fixed-target experiment consisting of a superconducting magnet, multiple detectors of various types, and high-performance computing for online event reconstruction and selection. The detector closest to the interaction point of the experiment will be the Micro Vertex Detector (MVD). Consisting of four planar stations equipped with custom CMOS pixel sensors, it will allow the primary vertex to be reconstructed with high precision and will help to reconstruct secondary vertices and to identify particles originating from conversions in the detector material.
Due to the high interaction rates foreseen for CBM, understanding and minimizing systematic errors arising from the detectors' operating conditions becomes all the more important for obtaining significant measurement results, since the statistical errors of many observables diminish thanks to the enormous amount of available data.
Furthermore, the MVD will be the first detector based on CMOS pixel sensors in a large physics experiment to be operated in vacuum. As a result, many aspects of the mechanical and electrical integration of the detector require careful testing and validation.
This thesis addresses both of these challenges specifically for the Micro Vertex Detector through the development of a control system for the operation and validation of the MVD prototype “PRESTO” in vacuum. The prototype was selected as the device under test because the final MVD has not yet been built.
The developed control system helps a) to operate the prototype safely and keep it at the desired working point and b) to record important time-series data on the state of the detector prototype. Those two aspects allow the control system (which may later serve as a ‘blueprint’ for the final detector) to minimize the aforementioned systematic errors as far as possible and to contribute to the understanding of the remaining systematic errors through correlations with the time-series data. The controlled operation of the prototype in vacuum made it possible to validate the integration concepts with respect to a wide range of mechanical and electrical aspects in an endurance test of more than a year of 24/7 operation.
The prototype for this study was named “PRESTO” (standing for ‘PREcursor of the Second sTatiOn of the CBM-MVD’). It represents one quadrant of an MVD detector plane, equipped with a total of 15 MIMOSA-26 sensors on the front and back sides of a carrier plate. Within this thesis, major parts of the prototype itself were designed. Custom ultra-thin flat flexible cables for data and power were designed and validated. Furthermore, the design of the CNC-machined aluminium heatsink that mounts and cools the prototype was refined to increase its thermal performance. A custom vacuum feedthrough for a total of 21 flat ribbon cables was designed and fabricated. The read-out chain for MIMOSA-26 was extended to cover a total of 8 sensors with a single, newer TRB-3 FPGA board and was set up with the prototype. Vacuum equipment including chambers, hoses, pumps, valves, and gauges was integrated into a large vacuum testing system. A cooling circuit for the prototype was assembled, comprising an external chiller, hoses, vacuum feedthroughs, as well as temperature, flow, and pressure sensors.
The control system was developed to serve the needs of the prototype, while taking the requirements of the final MVD already into account. The main design goals of the control system are:
• compatibility with the other detectors and the overall CBM experiment,
• access to real-time measurements of all necessary parameters (‘process values’),
• reliable, fail-safe operation of the detector,
• recording of all time-series data (‘archiving’),
• cost efficiency and acceptance within the physics community,
• good usability for the users (‘operators’),
• long-term maintainability.
The recorded time-series data of the process variables (i.e. sensor readings) allow a post-measurement analysis of variations in the detector performance. The long-term archiving of all relevant system parameters is therefore of outstanding importance, which is why the software intended for this purpose – called the “archiver” – was given special attention in this thesis.
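The archiving idea described above can be sketched as follows. This is a hypothetical minimal illustration, not the actual archiver software of the control system; the class and variable names are invented for this example.

```python
# Hypothetical sketch of the archiving principle: periodically record
# timestamped process-variable readings so that detector performance can
# later be correlated with operating conditions. All names are illustrative.
import time

class Archiver:
    def __init__(self):
        self.records = []  # list of (timestamp, variable name, value)

    def archive(self, name, value, timestamp=None):
        """Store one timestamped reading of a process variable."""
        self.records.append((timestamp if timestamp is not None else time.time(),
                             name, value))

    def series(self, name):
        """Return the recorded time series of one process variable."""
        return [(t, v) for t, n, v in self.records if n == name]

arch = Archiver()
arch.archive("sensor_temp_C", 28.4, timestamp=0.0)
arch.archive("chamber_pressure_mbar", 2e-6, timestamp=0.0)
arch.archive("sensor_temp_C", 28.6, timestamp=10.0)
print(arch.series("sensor_temp_C"))  # [(0.0, 28.4), (10.0, 28.6)]
```

A production system would of course persist such records to disk or a database rather than keep them in memory.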
For this reason in particular, it is necessary to implement a comprehensive control system that allows the detector to be operated safely under these conditions and cooled effectively. Before the start of this doctoral thesis, vigilant and extensively trained operators were always necessary for this. The control system that has been developed makes it possible that, after basic training, the detector can also be operated by a less specialised shift supervisor during measurement campaigns.
...
This dissertation addresses the AC conductivity of nano-granular metals fabricated by focused electron-beam-induced deposition (FEBID), as well as dielectric relaxation in metal-organic frameworks (MOFs). It was embedded in the interdisciplinary project “Dielectric and Ferroelectric Surface-Mounted Metal-Organic Frameworks (SURMOFs) as Sensor Devices” within the DFG priority programme “Coordination Networks: Building Blocks for Functional Systems” (SPP 1928, COORNETs). In this context, it pursues a sensor concept for the selective detection of analyte gases. The central achievement of this work consists in new insights into the AC conductivity of nano-granular Pt(C) FEBID deposits. These insights may in future constitute a further building block in the theoretical description of this fundamentally interesting subfield of solid-state physics, which is also important for sensing applications.
High-energy astrophysics plays an increasingly important role in the understanding of our universe. On one hand, this is due to ground-breaking observations, like the gravitational-wave detections of the LIGO and Virgo network or the black-hole shadow observations of the EHT collaboration. On the other hand, the field of numerical relativity has reached a level of sophistication that allows for realistic simulations that include all four fundamental forces of nature. A prime example of how observations and theory complement each other can be seen in the studies following GW170817, the first detection of gravitational waves from a binary neutron-star merger. The same detection is also the chronological starting point of this Thesis. The plethora of information and constraints on nuclear physics derived from GW170817 in conjunction with theoretical computations will be presented in the first part of this Thesis. The second part goes beyond this detection and prepares for future observations when also the high-frequency postmerger signal will become detectable. Specifically, signatures of a quark-hadron phase transition are discussed and the specific case of a delayed phase transition is analyzed in detail. Finally, the third part of this Thesis focuses on the inclusion of radiative transport in numerical astrophysics. In the context of binary neutron-star mergers, radiation in the form of neutrinos is crucial for realistic long-term simulations. Two methods are introduced for treating radiation: the approximate state-of-the-art two-moment method (M1) and the recently developed radiative Lattice-Boltzmann method. The latter promises
to be more accurate than M1 at a comparable computational cost. Given that most methods for radiative transport are either inaccurate or computationally infeasible, the derivation of this new method represents a novel and possibly paradigm-changing contribution to the accurate inclusion of radiation in numerical astrophysics.
During the first microseconds after the Big Bang, our universe is believed to have consisted of a hot, dense, and strongly interacting form of matter called the quark-gluon plasma (QGP).
In this medium, the elementary building blocks of matter, the quarks and gluons, are no longer bound inside hadrons but can instead behave like quasi-free particles.
For the ALICE collaboration at CERN's Large Hadron Collider (LHC), the investigation of this medium is one of the main goals. To create it in the laboratory, protons and nuclei are accelerated to nearly the speed of light and brought to collision. Center-of-mass energies of up to 13 TeV are reached in proton-proton (pp) collisions and up to 5.02 TeV in lead-lead (Pb--Pb) collisions.
In such high-energy collisions, the critical values of the energy density and temperature of approximately 1 GeV/fm³ and approximately 155 MeV, respectively, which have been determined with lattice QCD, are exceeded. These collisions therefore provide ideal conditions for a phase transition from ordinary matter to a QGP.
The evolution of such a medium, starting from the actual collision, followed by the formation of the plasma and the eventual hadronization, cannot be studied directly, however, since the plasma has an extremely short lifetime.
Studies of the QGP must therefore rely on particle measurements and on the modification of these measurements through the influence of the medium.
It has not yet been definitively settled whether a QGP forms only in collisions of heavy ions, or whether this is also the case in smaller collision systems such as proton-proton or proton-lead.
To constrain a possible formation of a mini-QGP in small collision systems, this thesis focuses on measurements of neutral pions and eta mesons with the ALICE detector at the CERN LHC. For this purpose, measurements are performed in a proton-proton reference system at sqrt(s) = 8 TeV and in a proton-lead (p--Pb) system at sqrt(sNN) = 8.16 TeV, which is subject to nuclear modification, and the results are compared.
Since the formation of a QGP is not expected in proton-proton collisions due to the insufficient energy density, a measurement in this system serves as a baseline, allowing effects of the collision itself to be separated from effects after the collision that influence particle production.
In addition to the QGP, particles can also interact with cold nuclear matter, which can be tested in asymmetric proton-lead collisions. In this collision system, at most a comparatively small QGP is formed, whereas the lead ion itself can act as cold nuclear matter.
In addition to the meson measurements, this thesis also measures the production of direct photons at low transverse momenta (pT) in multiplicity-dependent p--Pb collisions at a center-of-mass energy of sqrt(sNN) = 5.02 TeV; direct photons are considered a direct probe of, as well as a characteristic signal of, the QGP.
The neutral pions measured in this thesis can be understood as a superposition of the two lightest quark flavors, the up (u) and down (d) quarks, and their corresponding antiparticles.
The eta meson, in contrast, has an additional strange-quark component and a correspondingly higher mass.
Quarks are part of the Standard Model of particle physics, which describes the elementary particles and the fundamental forces acting between them, mediated by bosons.
The model comprises a total of six quarks, which differ in their mass and charge and act as the basic constituents of bound states known as hadrons.
The up and down quarks are the lightest quarks and therefore occur most frequently in nature. The best-known examples are the protons (uud) and neutrons (udd), the nucleons that form the building blocks of atomic nuclei.
The remaining quarks carry a considerably higher mass and therefore have a strong tendency to transform into lighter quarks, making their lifetimes very short. The top and bottom quarks, which are the heaviest, can therefore not be found in ordinary matter.
They can, however, be produced experimentally in high-energy particle collisions and detected indirectly via their decay products.
Quarks carry an electric charge of either 1/3 or 2/3 of the elementary charge, as well as a color charge, the latter being responsible for their binding into hadrons.
Hadrons consist either of three quarks, in which case they are called baryons, or of a quark-antiquark pair, which is called a meson.
These bound states carry an overall neutral color charge and an integer electric charge.
Furthermore, there are exotic pentaquark states, which consist of four quarks and one antiquark and have already been observed experimentally.
Because of the strong interaction, which is mediated by gluons, quarks cannot be observed in isolation.
...
This thesis explores the phase diagrams of the Nambu--Jona-Lasinio (NJL) and quark-meson (QM) models in the mean-field approximation and beyond. The focus lies on the interplay between inhomogeneous chiral condensates and two-flavor color superconductivity.
In the first part of this thesis, we study the NJL model with 2SC diquarks in the mean-field approximation and determine the dispersion relations for quasiparticle excitations for generic spatial modulations of the chiral condensate in the presence of a homogeneous 2SC-diquark condensate, provided that the dispersion relations in the absence of color superconductivity are known. We then compare two different Ansätze for the chiral order parameter, the chiral density wave (CDW) and the real-kink crystal (RKC). For both Ansätze we find for specific diquark couplings a so-called coexistence phase where both the inhomogeneous chiral condensate and the diquark condensate coexist. Increasing the diquark coupling disfavors the coexistence phase in favor of a pure diquark phase.
On the other hand, decreasing the diquark coupling favors the inhomogeneous phase over the coexistence phase.
In the second part of this thesis the functional renormalization group is employed to study the phase diagram of the quark-meson-diquark model. We observe that the region of the phase diagram found in previous studies, where the entropy density takes on unphysical negative values, vanishes when including diquark degrees of freedom. Furthermore, we perform a stability analysis of the homogeneous phase and compare the results with those of previous studies. We find that an increasing diquark coupling leads to a smaller region of instability as the 2SC phase extends to a smaller chemical potential. We also find a region where simultaneously an instability occurs and a non-vanishing diquark condensate forms, which is an indication of the existence of a coexistence phase in accordance with the results of the first part of this work.
Bohmian mechanics, as originally formulated in 1952, has been useful in the implementation of numerical methods in quantum mechanics. The scientific community, however, has remained critical of it ever since, so there are still points to be clarified and rectified. The two main problems are these: first, Bohmian mechanics gives a privileged role to the position representation; second, the current interpretation of Bohmian trajectories has recently been proven wrong.
In this context, Chapter 2 defines new complex Bohmian quantities that make it possible to formulate Bohmian mechanics in any arbitrary continuous representation, for instance the momentum representation. This chapter is fully based on two articles concerning the proposed complex Bohmian formulation and its extension into momentum space.
Chapter 3 deals with a redefinition and reinterpretation of the Bohmian trajectories based on the handling of the continuity equation; this is done without any need for additional postulates or interpretations. It is also proved that Bohmian mechanics is actually more than a projective aspect of the Wigner function.
As a third point, Chapter 4 presents a systematic treatment of the hydrodynamic scheme of Bohmian mechanics. A brief summary of the transport equations in Bohmian mechanics is given, and a unified hydrodynamic treatment is then derived. This treatment is useful for sketching a Bohmian approach to efficiently finding the steady value of the transmission integral.
In Chapter 5 conclusions of this thesis are drawn.
This thesis reports the realization of a fast and robust closed orbit feedback (COFB) system for on-ramp orbit correction at the SIS18 synchrotron of the FAIR project. SIS18 exhibits some peculiarities, including on-ramp optics variation, very short ramps (200 ms to 1 s), and a cycle-to-cycle variation of beam parameters. The realized fast COFB system, being robust against the above-mentioned features of SIS18, is the first of its kind, and the course of its realization led to some novel contributions to the field of closed orbit correction. A new method relying on a discrete Fourier transform (DFT)-based decomposition of the orbit response matrix (ORM) has been introduced, exploiting the symmetry in the arrangement of beam position monitors (BPMs) and corrector magnets in synchrotrons. A nearest-circulant approximation has also been introduced for synchrotrons that deviate slightly from this symmetry, making the method applicable to the vast majority of synchrotrons. Moreover, the performance and stability analysis of COFB systems in the presence of an ORM mismatch between the synchrotron and the feedback controller is presented. COFB systems are divided into slow and fast regimes, and a new stability criterion, consistent with measurements, is introduced. The practicality of the criterion is verified experimentally at COSY Jülich, and it is used for the analysis of various sources of ORM mismatch at SIS18. The commissioning of the SIS18 COFB system, which relies on Libera Hadron as the main hardware resource for the controller implementation, is also reported in detail. On-ramp orbit correction is demonstrated for the horizontal plane of SIS18, with disturbance rejection up to 600 Hz.
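The mathematical fact underlying the DFT-based ORM decomposition is that a circulant matrix (the form the ORM takes for a perfectly symmetric BPM/corrector arrangement) is diagonalized by the DFT, with eigenvalues given by the FFT of its first column. A small numerical sketch of this property, using a toy 6x6 matrix rather than an actual SIS18 ORM:

```python
# Sketch of the circulant/DFT property exploited by the method: for a fully
# symmetric ring, each column of the ORM is a cyclic shift of the first, and
# its eigenvalues are simply the FFT of that first column.
# The matrix below is a toy example with assumed coupling values.
import numpy as np

n = 6
first_col = np.array([5.0, 2.0, 0.5, 0.1, 0.5, 2.0])  # symmetric couplings

# Build the circulant ORM: column k is the first column cyclically shifted by k
orm = np.column_stack([np.roll(first_col, k) for k in range(n)])

# Eigenvalues via dense eigendecomposition, O(n^3) ...
dense_eigs = np.sort_complex(np.linalg.eigvals(orm))
# ... and via the FFT of the first column, O(n log n)
fft_eigs = np.sort_complex(np.fft.fft(first_col))

print(np.allclose(dense_eigs, fft_eigs))  # True
```

For rings with only slight deviations from this symmetry, the nearest-circulant approximation mentioned above replaces the true ORM by its closest circulant matrix so that the same fast diagonalization applies.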
The need for a versatile signal generator has always been evident in modern RF and communication systems. The most conventional technique, the voltage-controlled oscillator (VCO), has inferior phase noise and narrow bandwidth, even though its operating frequency can reach the sub-THz regime. Its phase noise is influenced by various parameters associated with the oscillator circuit, e.g. transistor size and noise, bias current, and noise leaking from the bias supply. The bandwidth is limited because the relation between the input voltage and the output frequency of the VCO is not strictly linear over the tuning range. The phase noise and SFDR of the VCO output can be enhanced by using the phase-lock technique: the phase-locked loop (PLL) uses a feedback system to lock the VCO output to a reference frequency. However, the settling time of the PLL is higher due to the feedback control loop, and this higher settling time increases the frequency switching time between PLL outputs. IG oscillators are suitable for multi-GHz-range and wide-bandwidth applications. Signal generation can also be achieved with free-electron radiation, optical lasers, and Gunn diodes, which can operate even in the THz domain. All these signal generators suffer from slow frequency switching and a lack of digital controllability and advanced modulation capability, even though their operating frequencies reach the THz regime. Alternatively, the arbitrary waveform generator (AWG) can produce a wide range of frequencies with low phase noise, including digital controllability. One of the vital components of the AWG is the direct digital synthesizer (DDS). It is generally composed of a phase accumulator, a digital-to-analogue converter, a sine-mapping circuit, and a low-pass filter. It needs a reference clock that sets the sampling of the DDS output, and its output frequency can be varied by applying an appropriate digital input code.
However, high-speed DDSs have several limitations, such as a low number of output frequency points, the lack of a phase control unit, and high power consumption. This work addresses these limitations.
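The DDS building blocks named above can be illustrated with a generic textbook sketch (this models the general DDS principle, not the specific circuit developed in this work; clock and output frequencies are assumed values):

```python
# Generic DDS principle: an N-bit phase accumulator advances by a frequency
# tuning word (FTW) on every clock cycle; the accumulated phase indexes a
# sine mapping. Output frequency: f_out = FTW * f_clk / 2^N.
# Parameter values below are illustrative assumptions.
import math

N = 32                      # phase accumulator width in bits
f_clk = 1.0e9               # 1 GHz reference clock (assumed)
f_out = 100.0e6             # desired 100 MHz output (assumed)
ftw = round(f_out * 2**N / f_clk)   # frequency tuning word (digital input code)

phase = 0
samples = []
for _ in range(20):
    phase = (phase + ftw) % 2**N            # accumulator wraps at 2^N
    samples.append(math.sin(2 * math.pi * phase / 2**N))  # sine mapping

# The frequency resolution is the smallest tuning step, f_clk / 2^N:
print(f"resolution = {f_clk / 2**N:.3f} Hz")
```

This also makes the trade-off visible: more output frequency points require a wider accumulator, which is one reason high-speed implementations limit N and hence the number of frequency points.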
This thesis addresses the problem of correlated electron systems in real materials. The starting point is the quantum-mechanical description of these systems within Kohn-Sham density functional theory, which treats the electrons of the crystal systems as effectively non-interacting particles.
While this modeling is successful for many classes of materials, correlated electron systems are distinguished by the fact that the collective character of the electron dynamics cannot be neglected.
To investigate these correlation effects in more detail, we employ the Hubbard model, which can be constructed from Kohn-Sham density functional theory via the projective Wannier-function method.
The Hubbard model includes only the local electron-electron interaction on a lattice. Although the model appears very simple, exact solutions exist only in certain limiting cases. This necessitates the development of approximate approaches, with the further development of the two-particle self-consistent method (TPSC) playing a central role in this work.
TPSC is a many-body method that can be derived in the language of functional derivatives and so-called conserving approximations.
The central idea is to approximate the effective interaction vertex as static and local. This in turn allows the equations of motion of the system to be simplified considerably, so that an approximate numerical solution of the Hubbard model becomes possible. The only prerequisite is that the system is in the normal phase and that the fluctuations arising near phase transitions are not too large.
While this method was originally developed by Y. M. Vilk and A.-M. Tremblay for the single-orbital Hubbard model, we present an extension to multi-orbital systems in this work.
In the multi-orbital case, several complications arise in the TPSC derivation that must be handled with additional approximations. These are discussed using a simple two-orbital model system, and the TPSC results are furthermore compared with those of the established dynamical mean-field theory.
In this context, possible future extensions and improvements of TPSC are also discussed.
Another important aspect is the application of TPSC to real materials.
In this context, this work investigates the superconducting properties of the organic K-(ET)2X systems. The TPSC results suggest that the popular dimer model used to describe these materials is not sufficient to explain the experimentally determined critical temperatures, and that the more complex molecule model admits additional exotic superconducting solutions.
Finally, we also investigate the electronic properties of the iron-based superconductor LiFeAs and discuss to what extent non-local correlation effects, which TPSC is able to resolve, reproduce the experimental data.
This dissertation presents the development of a new radio frequency quadrupole (RFQ) structure of the 4-rod type with an operating frequency of 108 MHz for the acceleration of heavy ions with mass-to-charge ratios of up to 8.5 at high duty cycles up to CW operation ("continuous wave") at the High Charge Injector (HLI) of the GSI Helmholtz Centre for Heavy Ion Research in Darmstadt.
The need to develop a completely new RFQ for the HLI arises from the fact that with the previously designed and built 4-rod RFQ structure, which was commissioned at the HLI in 2010 as part of the planned HLI upgrade program, the desired operating modes in both pulsed and CW operation could not be achieved even after several years of operating experience and considerable efforts to eliminate, or at least mitigate, the severe operational instabilities. Mechanical vibrations of the electrodes, which result in strongly modulated power reflection, as well as the high thermal sensitivity, proved to be particularly problematic.
In addition to the RF design of the new RFQ through simulations performed with the CST Microwave Studio software, the investigations focused on the mechanical analysis of vibrations of the electrode rods caused by RF operation, for which the ANSYS Workbench software was used. Due to the high thermal load on the RFQ structure of more than 30 kW/m in CW operation, an accurate analysis of the thermal effects on electrode deformation, as well as of the resulting frequency detuning of the resonator, is also required; this was investigated by simulations within the capabilities of CST Mphysics Studio.
Based on the results of the design studies carried out by simulations and the thereby achieved design optimizations, a 4-rod RFQ prototype with 6 stems was finally manufactured, on which most of the properties expected from the simulations could be validated by measurements of the RF characteristics as well as of the vibration behavior.
Finally, based on the results of the pre-tests and considering a newly developed beam dynamics concept, a completely revised RF design for a new full-length HLI-RFQ was derived from the prototype design.
This thesis describes the fabrication and characterization of various types of piezoresistive thin films, deposited by sputter deposition, for pressure and strain sensing at high temperatures:
- metallic films of chromium with impurities of oxygen, nitrogen, or platinum,
- granular ceramic-metal films (cermets), with platinum or nickel as the metal component and aluminum oxide (Al2O3) or boron nitride (BN) as the ceramic component.
With suitable deposition parameters, both film types can exhibit substantial piezoresistive effects, i.e. a resistance-strain effect several times larger than that of typical metal films. The effect is quantified by the gauge factor k, which relates the relative change of the resistance R to the relative change of the length l, i.e. the strain ε = Δl/l: k = ΔR/(R ε).
In series of depositions, the film composition and the deposition conditions are varied, and the effects on the electrical resistance, its temperature coefficient (TCR), and the gauge factor are investigated. The gauge factors of the chromium-based films range from 10 to 20, with a TCR tunable to around zero. Depending on the material, the cermet films reach gauge factors from 7 to over 70, mostly with strongly negative TCRs of several −0.1 %/K.
The chromium and chromium-nitrogen films prove to be suitable sensing films for membrane pressure sensors. A series of sensors with Wheatstone measuring bridges is therefore fabricated and characterized. They show high signal spans corresponding to the high gauge factors. The good sensor properties are retained at high temperatures up to 230 °C.
After the initial investigations at strains of up to 0.1 %, the behavior of the films at higher strains up to 1.4 % is additionally studied. A predominantly linear resistance-strain behavior is observed. However, the conductor tracks of the brittle chromium-based films are destroyed at strains around 0.7 % by cracks propagating from the edges of the film.
The platinum-aluminum oxide film shows an enormously large, nonlinear resistance-strain effect that can be attributed to cracks which open and close reproducibly after a few loading cycles.
Tieftemperaturmessungen von 2 bis 300 K zeigen Widerstandsminima der Chrom-Stickstoff-Schichten; Magnetwiderstandsmessungen deuten jedoch nicht auf den Kondo-Effekt hin.
Die Cermet-Schichten zeigen thermisch aktivierte Leitfähigkeit.
Ausgewählte Schichten werden bei Temperaturen bis 420 °C (693 K) charakterisiert. Die chrombasierten Schichten haben bei hohen Temperaturen stabile Widerstände, zeigen jedoch stark nichtlineare Temperaturverläufe von Widerstand und k-Faktor. Oberhalb einer gewissen Temperatur verschwindet der piezoresistive Effekt, kehrt jedoch beim Abkühlen zurück. Die Verläufe lassen sich durch die Schichtzusammensetzung und auch durch Temperaturbehandlungen modifizieren.
Die Platin-Aluminiumoxid-Schicht ist ebenfalls temperaturstabil und zeigt geringe Änderungen des k-Faktors im Temperaturverlauf. Platin-Bornitrid zeigt große, reversible Widerstandsänderungen bei höheren Temperaturen, die auf mögliche Gaseinlagerungen hindeuten.
Aus den experimentellen Ergebnissen lassen sich die Ursachen der Piezoresistivität ableiten: Die chrombasierten Schichten bilden, wie in der Literatur vielfach beschrieben, unterhalb einer Ordnungstemperatur einen Spindichtewellen-Antiferromagnetismus aus. Dieser Zustand führt zu einem zusätzlichen Widerstandsbeitrag, der die beschriebenen Nichtlinearitäten der Widerstands-Temperatur-Verläufe verursacht und zudem empfindlich auf mechanische Dehnung reagiert und so zu erhöhten k-Faktoren führt.
The piezoresistivity of the cermet films results from their granular structure, in which charge carriers tunnel between metal particles. Since the resistances of the tunnel junctions depend exponentially on the particle spacing, high gauge factors result. The experimental results are discussed with the help of model considerations in which equations for tunnelling resistances are applied to granular systems. The results suggest that the properties of the ceramic primarily affect the magnitude of the gauge factors, while the properties of the metal mainly influence the temperature coefficient of resistance (TCR).
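As an illustration of why exponentially spacing-dependent tunnelling resistances translate into high gauge factors, the following minimal sketch models a single tunnel junction as R(d) = R0·exp(d/d0). The gap d, decay length d0 and strain value are invented for illustration and are not taken from the thesis:

```python
import numpy as np

# Minimal single-junction sketch of granular tunnelling piezoresistivity.
# All parameter values (1 nm gap, 0.1 nm decay length, 0.1 % strain) are
# illustrative assumptions, not measured quantities from the thesis.
def junction_resistance(d, r0=1.0, d0=0.1e-9):
    """Tunnel-junction resistance, exponential in the particle gap d."""
    return r0 * np.exp(d / d0)

def gauge_factor(d, strain, d0=0.1e-9):
    """k = (dR/R) / strain for a gap stretched from d to d*(1 + strain)."""
    r = junction_resistance(d, d0=d0)
    r_strained = junction_resistance(d * (1.0 + strain), d0=d0)
    return (r_strained - r) / r / strain

d = 1.0e-9      # 1 nm inter-particle gap
eps = 1.0e-3    # 0.1 % strain
k = gauge_factor(d, eps)
# For small strains k approaches d/d0 = 10, well above the k of about 2
# typical for ordinary metal films.
print(f"k = {k:.2f}")
```

The exponential makes the gauge factor scale with d/d0 rather than with the geometry factor of a metallic conductor, which is the qualitative point of the model considerations above.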
For the present work, in order to analyse the Auger decay of small molecules after photoionisation, the momentum and energy spectra of photoelectrons and Auger electrons resulting from the decay reaction were recorded in coincidence with those of the ionic fragments. This allowed a separate treatment of the molecular states populated during the ionisation step and the decay step of this process. To gain further insight into the decay dynamics, existing theoretical models, which in particular include the interaction of the charged particles produced in the reaction (post-collision interaction), were fitted to the measured energy spectra. This enabled the separate consideration of the molecular states populated in the ionisation step, so that the emission-angle distributions of the photoelectrons in the molecular frame could be examined individually for each populated initial state. The final states of the decay were separated by analysing the kinetic-energy-release spectrum of the ions and comparing it with calculated potential curves of the contributing final states.
The initial-state-resolved treatment of the Auger decay also made it possible to analyse the influence of these states on the decay dynamics. For this purpose, fitting the model profiles yielded the lifetime of the respective 1s core-hole state in the corresponding decay channel. These lifetimes were determined from the photoelectron energy spectra as a function of various parameters, with an accuracy in the attosecond range.
In the last two decades, new, unpredicted charmonium-like states with extraordinary characteristics have been observed experimentally. These states, also known as the XYZ states, e.g. the Y(4260) or the X(3872), are mostly interpreted as QCD-allowed exotic hadrons. One of the leading hadron-physics experiments in the world, the Beijing Spectrometer III (BESIII) at the Beijing Electron-Positron Collider II (BEPCII), aims at revealing the internal structure of these states and has delivered numerous breakthrough discoveries, including the discovery of the charged Zc(3900). In order to understand the nature of the Y(4260) state and its decay patterns, an inclusive analysis is performed for different recoil systems (π+π−, K+K− and K±π∓) using the BESIII data samples at center-of-mass energies above 4 GeV collected between 2013 and 2019. The aim of this analysis is twofold: on the one hand, we search for new, unobserved charmonium-like decay channels using the missing-mass technique; on the other hand, it provides an accurate inclusive cross-section measurement for e+e− → X π+π−, with X being the J/ψ, hc and ψ(2S), respectively. Two resonant structures, the Y(4220) and the Y(4390), are observed in the inclusive energy-dependent Born cross section of e+e− → hc π+π−, consistent with the BESIII exclusive measurements. Moreover, the energy-dependent cross section of e+e− → J/ψ π+π− is investigated, in which two resonances, the Y(4220) and the Y(4320), are observed, consistent with previous BESIII exclusive studies. In the K±π∓ recoil system, possible Y(4260) open-charm decay channels are investigated. Two enhancements are observed in the inclusive energy-dependent cross section of e+e− → DD̄ above 4.13 GeV, which could possibly be the ψ(4160) and the ψ(4415).
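The missing-mass technique mentioned above can be illustrated with a short sketch: by four-momentum conservation, the invariant mass recoiling against a measured system follows from the known centre-of-mass energy. The numerical values below are invented for illustration and are not taken from the BESIII data:

```python
import math

# Illustrative missing-mass calculation in the e+e- centre-of-mass frame,
# where the initial state is at rest. Inputs below are made-up numbers.
def missing_mass(sqrt_s, recoil_e, recoil_p):
    """M_miss^2 = (P_init - P_recoil)^2, natural units (GeV).

    sqrt_s   : total centre-of-mass energy
    recoil_e : summed energy of the measured recoil particles
    recoil_p : magnitude of their summed three-momentum
    """
    e_miss = sqrt_s - recoil_e
    m2 = e_miss**2 - recoil_p**2
    return math.sqrt(m2) if m2 > 0 else 0.0

# A pi+pi- pair carrying 1.02 GeV of energy and 0.72 GeV/c of net momentum
# at sqrt(s) = 4.26 GeV recoils against a mass of about 3.16 GeV.
m = missing_mass(4.26, 1.02, 0.72)
print(f"M_miss = {m:.3f} GeV")
```

Peaks in the distribution of M_miss then signal known or new states without reconstructing their decay products exclusively.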
The small photoreceptor Photoactive Yellow Protein (PYP) enters a reversible photocycle after excitation with blue light. The intermediate states are formed on timescales ranging from femtoseconds to seconds and include chromophore isomerization and protonation as well as large structural rearrangements. To obtain local dynamic information, the vibrational label thiocyanate (SCN) can be inserted site-specifically at any desired position in the protein by cysteine mutation and cyanylation. The label's CN stretch vibration is highly sensitive to polarity, hydrogen-bonding interactions and electric fields, and is spectrally well separated from the overlapping protein absorptions. In the course of this thesis it was impressively demonstrated that the successful incorporation of the SCN label at selected positions in PYP provides a powerful tool to study structural changes and dynamics during the photocycle and to enhance the local information obtained by infrared (IR) spectroscopic methods. Hence the SCN-labeled protein mutants were studied under equilibrium (steady-state) and non-equilibrium conditions.
Examination of the SCN absorption by FTIR spectroscopy showed the influence of various local environments on the label at different locations in the dark state. The response of the label under illumination with blue light reveals information about structural changes in the signaling state. Additional information on both states was obtained from the vibrational lifetime of the CN vibration, measured via ultrafast IR-pump-IR-probe experiments; this observable is particularly sensitive to solvent exposure of the label. Time-resolved IR spectroscopy proved to be an excellent method to follow the protein dynamics throughout most of the photocycle, on timescales from hundreds of femtoseconds to milliseconds. By close inspection of protein and chromophore dynamics in wildtype PYP over nine decades in time, new insights into the changes leading to the proposed photocycle intermediates were obtained. The investigation of the SCN label made it possible to follow the different transient structural changes with high local resolution. Depending on its position within the protein, the response of the label provided additional information on the photocycle transitions.
The insights obtained from the different observables in the steady state, and from the reaction of the SCN label to the formation of the different intermediate states during the photocycle, contribute to an improved understanding of local, light-induced structural changes in the photoreceptor PYP. This comprehensive study demonstrated the potential of SCN as an IR label for the investigation of protein dynamics.
Hofstadter-Hubbard physics
(2020)
The Hofstadter model is, besides the Haldane and Kane-Mele models, the most common tight-binding model hosting topologically nontrivial states of matter. In its time-reversal-symmetric formulation the model can even describe topological insulators. Experimentally, the Hofstadter model was realized with ultracold quantum gases in optical lattices, which are a well-controlled way to engineer quantum states of tight-binding Hamiltonians. Another established control parameter in ultracold quantum gases are two-particle, on-site interactions, also known as Hubbard interactions. This work aims at introducing the reader to the concepts of topological states of matter, a collection of corresponding tight-binding models, and the methodology to treat interacting topological states with dynamical mean-field theory. We present recent results for inhomogeneous, interacting systems and spin-imbalanced magnetic systems, propose experimental detection methods, and discuss extensions to three-dimensional topological states.
Understanding the hadron spectrum is one of the primary goals of non-perturbative QCD. Many predictions have been confirmed experimentally; others remain under experimental investigation. Of particular interest is how gluonic excitations give rise to states with constituent glue. One class of such states are hybrid mesons, which are predicted by theoretical models and lattice QCD calculations. Searching for and understanding the nature of these states is a primary physics goal of the GlueX experiment at the CEBAF accelerator at Jefferson Lab. A search for a JPC = 1−− hybrid-meson candidate, the Y(2175), in the φ(1020)π+π− and φ(1020)f0(980) channels in photoproduction on a proton target has been conducted. A first measurement of the non-resonant φ(1020)π+π− and φ(1020)f0(980) total cross sections in photoproduction has been performed, and upper limits on the resonance production cross sections for the Y(2175) → φ(1020)π+π− and Y(2175) → φ(1020)f0(980) channels are estimated. Since the analysis essentially depends on the quality of the charged-kaon identification, an optimization of the particle identification through an improved energy-loss estimation in the central drift chamber using a truncated-mean method has also been investigated.
Proteins are the machines of the cell. To guarantee the functionality of numerous cellular processes, communication signals must be relayed within proteins. The transmission of a perturbation at one site in a protein to a remote site, where it triggers structural and/or dynamic changes, is called allostery. Initially, allostery was mainly associated with large-scale conformational changes, but later a more dynamic view of allostery in the absence of such large-scale conformational changes emerged. This gave rise to the idea of an allosteric pathway consisting of conserved and energetically coupled amino acids that mediate signal transmission between remote sites in the protein. Numerous theoretical studies have linked these allosteric pathways to pathways of efficient anisotropic energy flow. Energy flow along these networks connects allosteric signalling with vibrational energy transfer (VET). Most research on dynamic allostery is based on theoretical methods, because only few suitable experimental techniques exist. To better understand this essential biological process of information transfer, the development of new and powerful experimental tools and techniques is therefore urgently required. This is the goal of the present dissertation.
VET in proteins is inherently anisotropic due to the protein geometry. All globular proteins possess channels of efficient energy flow, which are suspected to be important for protein functions such as the rapid dissipation of excess heat, ligand binding and allosteric signal transmission. VET can be studied with time-resolved infrared (IR) spectroscopy, in which a femtosecond laser pump pulse injects vibrational energy into a molecular system at a specific site and an IR probe pulse, following after a variable time interval, detects the propagation of this vibrational energy. A protein-compatible and universally applicable chromophore that converts the energy of a visible photon into vibrational energy is needed as a heater in order to map long-range VET pathways in proteins. The azulene (Azu) chromophore is suitable for this purpose because, after photoexcitation of its first electronic state, it converts almost all of the injected energy into vibrational energy within one picosecond by ultrafast internal conversion. Embedded in the non-canonical amino acid (ncAA) β-(1-azulenyl)-L-alanine (AzAla), the Azu moiety can be incorporated into proteins. The arrival of the injected vibrational energy at a specific site in the protein can be detected with an IR sensor. The combination of Azu as VET heater and azidohomoalanine (Aha) as VET sensor with transient IR (TRIR) spectroscopy had already been successfully tested on small peptides in the dissertation of H. M. Müller-Werkmeister, which preceded the present dissertation in the laboratories of the Bredenbeck group.
The vibrational frequency of chemical bonds is highly sensitive to even small changes of conformation and dynamics in the immediate environment and can be measured with IR spectroscopy, e.g. with Fourier-transform IR (FTIR) spectroscopy. IR spectroscopy offers an exceptionally good time resolution, which makes it possible to observe dynamic processes in molecules on a timescale of a few picoseconds, such as the ultrafast transmission of vibrational energy. With two-dimensional (2D) IR spectroscopy, the relaxation of vibrationally excited states and structural fluctuations around the vibrating bond can be investigated. However, the outstanding time resolution comes with limited spectral resolution. In larger molecules with numerous bonds, the vibrational bands overlap and spatial resolution is lost. To overcome this limitation, IR labels can be used: chemical groups that absorb in a spectrally transparent region of the protein/water spectrum (1800 to 2500 cm-1). As ncAAs they can be incorporated co-translationally into proteins at any desired position and thus provide site-specific information from the protein interior. Owing to their small size, a relatively large extinction coefficient (350-400 M-1 cm-1) and a high sensitivity to changes in the local environment, organic azides (N3) such as Aha are particularly suitable IR labels. Aha can be incorporated into proteins as a methionine analogue.
...
This dissertation deals with the development of FAIR-relevant X-ray diagnostics based on the interaction of lasers and particle beams with matter. The associated experimental methods are intended to be employed in the HIHEX experiments in the HHT cave of the GSI Helmholtz Center for Heavy-Ion Research GmbH (GSI) in Phase 0 and in the APPA cave at the Facility for Antiproton and Ion Research (FAIR) in Darmstadt, Germany.
Diagnostics of the high-areal-density targets that will be used in FAIR experiments demand intense and highly penetrating X-ray sources. Laser-generated, well-directed relativistic electron beams interacting with high-Z materials are an excellent tool for the generation of short-pulse, highly luminous sources of MeV gammas.
In pilot experiments carried out at the PHELIX laser system at GSI Darmstadt, relativistic electrons were produced in a long-scale plasma of near-critical electron density (NCD) by the mechanism of direct laser acceleration (DLA). Low-density polymer foam layers, preionised by a well-defined nanosecond laser pulse, were used as NCD targets. The analysis of the measured electron spectra showed up to a tenfold increase of the electron "temperature": from T_Hot = 1–2 MeV, measured for the interaction of a 1–2 × 10^19 W cm^(−2) ps laser pulse with a planar foil, up to 14 MeV when the relativistic laser pulse propagates through the foam layer preionised by the ns pulse. In this case, electron energies of up to 80–90 MeV were registered. The increase of the electron energy was accompanied by a strong increase of the number of relativistic electrons and by a well-defined directionality of the relativistic electron beam, measured to be (12 ± 1)° (FWHM). This directionality increases the gamma flux on target far beyond that of soft X-ray sources.
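A hot-electron "temperature" of this kind is commonly read off the exponential tail of the measured spectrum, dN/dE ∝ exp(−E/T_Hot), by a straight-line fit to the logarithm of the counts. The following sketch demonstrates the idea on a synthetic spectrum; the chosen temperature, energy range and noise level are assumptions for illustration and do not model the actual spectrometer:

```python
import numpy as np

# Synthetic exponential electron spectrum with an assumed T = 14 MeV tail;
# a linear fit of log(dN/dE) vs E has slope -1/T.
rng = np.random.default_rng(0)
t_true = 14.0                                      # MeV, assumed
energies = np.linspace(20.0, 80.0, 25)             # MeV, spectral tail
counts = 1e6 * np.exp(-energies / t_true)
counts *= rng.normal(1.0, 0.02, size=counts.size)  # 2 % measurement noise

slope, _ = np.polyfit(energies, np.log(counts), 1)
t_fit = -1.0 / slope
print(f"T_Hot = {t_fit:.1f} MeV")                  # close to the assumed 14 MeV
```

In practice the fit range must exclude the low-energy part of the spectrum, where the distribution deviates from a single exponential.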
In addition to laser-based active diagnostics, passive techniques involving the inherent X-ray fluorescence radiation of projectile and target, emitted during the heavy-ion-target interaction, can be used to measure the ion-beam distribution during a shot. This information is of great importance, since the target size is chosen to be smaller than the beam focus in order to ensure homogeneous heating of the HIHEX target by the ion beam. Large amounts of parasitic radiation and activation of the experimental equipment are expected for experiments at the APPA cave. For this reason, all electronic devices must be placed at a safe distance from the target chamber. In order to transport the signal over this distance, the X-ray image of the target irradiated by heavy ions has to be converted into an optical one.
For these purposes, the X-ray Conversion to Optical radiation and Transport (XCOT) system was developed within the framework of a BMBF project and commissioned during this work in two beamtimes at the UNILAC, GSI.
In the experiments, we observed intense radiation of target atoms (K-shell transitions in Cu at 8–8.3 keV and L-shell transitions in Ta) ionised in collisions with heavy ions, as well as Doppler-shifted L-shell transitions of Au projectiles passing through targets. This radiation can be used for monochromatic (dispersive elements such as bent crystals) or polychromatic (pinhole) 2D X-ray mapping of the ion-beam intensity distribution in the interaction region during the beam-target interaction. We measured the efficiency of X-ray photon production as a function of the target thickness and the number of ions passing through the target. The spatial resolution of the XCOT system based on the multi-pinhole camera was measured to be (91 ± 17) μm for an image magnification factor M = 2. It was considerably improved by the application of a toroidally bent quartz crystal and reached 30 μm at M = 6. This resolution is optimal for imaging the distribution of an ion beam 1 mm in diameter. As a next step, the XCOT system will be tested during the SIS18 beamtime at the HHT experimental area.
In this work, a reaction microscope (REMI) based on the COLTRIMS (Cold Target Recoil Ion Momentum Spectrometry) measurement principle was newly designed and constructed. The performance of the experimental setup was impressively demonstrated both in various test series and subsequently under real measurement conditions at the synchrotron radiation facility SOLEIL and at its final destination, the SQS (Small Quantum Systems) instrument of the free-electron laser European XFEL (X-ray free-electron laser).
With the COLTRIMS experimental technique it is possible to detect all charged fragments of an interaction between a projectile particle and a target particle using two position- and time-resolving detectors. In a vacuum vessel, the target substance, prepared as a molecular beam, is brought to overlap with a projectile beam (e.g. of the XFEL) at the centre of the main chamber, so that an interaction can take place there. The resulting fragments are positively charged ions and negatively charged electrons. Electric fields generated by a spectrometer unit, together with magnetic fields generated by Helmholtz coils, guide the charged fragments towards the detectors. The position and time measurement of an individual particle (e.g. an ion) takes place in coincidence with the other particles (e.g. further ions or electrons). With this measurement method, the momentum vectors and charge states of all charged fragments can be measured in coincidence. Since the geometric arrangement of the individual components plays a decisive role in the performance of the experiment, several boundary conditions had to be fulfilled in the redesign of the COLTRIMS apparatus for use at a free-electron laser (FEL). Particular attention was paid to the demanding vacuum requirements of the setup, owing to the enormous light intensity of an FEL. The interplay of the many individual components was first verified in several test series. Among other measures, by varying the material and finish of the vacuum components, the previously defined specifications could finally be met. The newly designed target preparation system for generating molecular gas beams now allows the use of up to four differentially pumped stages of different dimensions.
In addition, high-precision piezo actuators were installed, which allow apertures to be moved in vacuum and thus enable a variable adjustment of the local target pressure. The electric fields of the spectrometer were adapted to each experiment by means of simulations of the particle trajectories, particle times of flight and detector resolution.
Since the measurements and results discussed in this work address the interaction processes of X-ray and synchrotron radiation with matter, the generation of synchrotron radiation both in circular accelerators and in modern free-electron lasers (FELs) is explained and derived. The European XFEL, a free-electron laser operating in the X-ray regime, which served among other things as the radiation source for the experiments shown here, is one of only a few facilities of its kind worldwide. Its light intensity in this wavelength range exceeds that of previously used synchrotron radiation facilities by up to eight orders of magnitude.
In the first deployment of the new apparatus at the synchrotron radiation facility SOLEIL, the ultrafast dissociation process of chloromethane (CH3Cl) was investigated. During the decay process following excitation by X-rays, high-energy Auger electrons are emitted, which were detected in coincidence with various molecular fragments. The decay mechanism of ultrafast dissociation describes Auger electron emission, after resonant molecular excitation, occurring while the molecule dissociates. The kinetic energy of the Auger electron depends on the time of its emission. The measured Auger electrons can thus provide a "snapshot" of the temporal sequence of the dissociation process.
A detailed description of the data analysis is given, consisting of calibration measurements and an interpretation of the measured data. The final discussion presents the electron emission-angle distributions in the molecular frame. At the beginning of the dissociation, the angular distribution of the Auger electrons is influenced by the surrounding molecular potential and shows pronounced structures along the bond axis. As the binding partners move apart and the Auger electron is emitted in the meantime, these structures increasingly vanish and a preferred emission direction perpendicular to the molecular axis becomes visible.
The analysis of the measurement data on multiphoton ionisation of oxygen molecules at the free-electron laser European XFEL enabled, among other things, the observation of "hollow molecules", i.e. systems with double core vacancies. Such states are predominantly created by the sequential absorption of two photons, where the required photon density can only be provided by FEL facilities. Here, the goal was achieved of observing, for the first time and with femtosecond precision, the photoelectron emission-angle distributions of multiply ionised oxygen molecules (O+/O3+ breakup channel) resulting from the underlying mechanisms. To this end, a simplified scheme of the various decay steps was established, and it was finally determined that the decay can be described by a PAPA sequence, i.e. a twofold succession of photoionisation and Auger decay, creating four positive charges in the molecule. The second XFEL photon is absorbed during the dissociation of the Coulomb-repelling fragments, making this a two-step pump-probe process. Finally, double core vacancies in the oxygen molecule were also demonstrated after selection of the O2+/O2+ breakup channel. Here, the two possibilities of a two-site or single-site double core vacancy could be considered separately, and likewise for the first time the electron-emission behaviour of these two states could be compared.
In this thesis different descriptions for the non-Abelian Landau-Pomeranchuk-Migdal (LPM) effect are studied within the partonic transport approach BAMPS (Boltzmann Approach to Multi-Parton Scatterings), which numerically solves the 3+1-dimensional Boltzmann equation for massless partons based on elastic and radiative interactions calculated in perturbative quantum chromodynamics.
The LPM effect is a coherence effect originating from the finite formation time of gluon emissions, leading to characteristic dependencies of the radiative energy loss of energetic partonic projectiles, such as jets in ultra-relativistic heavy-ion collisions.
Due to this non-locality of interactions, such coherence effects are difficult to describe rigorously in transport theory.
Therefore we compare in this work three different implementations of the LPM effect: i) a parametric LPM suppression based on a theta function in the radiative matrix elements, ii) a stochastic LPM approach, which explicitly simulates the elastic interactions of gluons during their formation time, and iii) the thermal gluon emission rate from the AMY formalism, a hard-thermal-loop calculation that treats the non-Abelian LPM effect exactly by resumming ladder diagrams in the large-medium limit.
After discussing the numerical implementation of the three approaches, we investigate their consequences in different jet-energy loss scenarios: first the academic scenarios of eikonal and non-eikonal jets flying through a static brick of thermal quark-gluon plasma and then jets traversing the expanding medium of ultra-relativistic heavy-ion collisions at LHC energies.
We can demonstrate that, although the different LPM approaches show similarities in the radiative energy loss, there are differences in the underlying gluon emission spectra, which originate from the specific treatment of divergences in the matrix elements within BAMPS.
Furthermore, based on the different LPM approaches we present simulation results for recent jet quenching observables from the LHC experiments and discuss properties of the underlying heavy-ion medium.
This doctoral thesis is concerned with the development of a method that allows the mid-infrared absorption spectra of human epidermis to be measured in vivo and non-invasively, using photoacoustic spectroscopy. The main focus is the monitoring of the glucose level in the epidermal interstitial fluid and its correlation with the blood glucose level, the most important parameter for the diagnosis and treatment of diabetes mellitus. Most publications in this field have reported only in vitro measurements of the absorption spectra of epidermis in the mid-infrared range. Using the approach presented in this work, it was possible to record the absorption spectra of volunteers' skin in vivo and in situ, and with these spectra the changing glucose concentration could be monitored. The novelty of the photoacoustic method introduced here is that it operates in acoustic resonance in the ultrasound range, which considerably reduces the signal noise due to external acoustic background. Although the photoacoustic method reported in this work was used to measure glucose in human epidermis, it can also be applied to other solid samples with relevant absorption bands in the mid-infrared. Furthermore, it can be used in other spectral regions if the laser source covers relevant absorption bands of the sample.
With the COLTRIMS technique, ever more complicated reactions can be investigated, but the number of reaction fragments to be detected increases accordingly. The detection of ions is usually unproblematic, since their times of flight are long compared with the dead time of the detectors used. Electrons, on the other hand, are very light and reach the detector within a few tens of nanoseconds. Current detectors, however, allow only a few electrons to be detected, so new detectors are needed to register all particles. The goal of this work was therefore to develop a detector that achieves this.
At the beginning of this monograph, the COLTRIMS technique is introduced. Experiments with this measurement method are mainly carried out with a delay-line anode. However, this anode reaches its limits in the detection of multiple particles, and some experiments can only be analysed incompletely.
Before a new detector can be developed, it must first be understood how the particles and signals to be detected are generated and what their properties are. For this reason, the MCP (microchannel plate), which generates the secondary particles, is presented in detail.
Furthermore, this work gives a comprehensive overview of anodes realised to date. Various representatives of the five anode types (area, strip/pixel, delay-line, camera and semiconductor anodes) are presented and evaluated.
With this knowledge, three approaches for new anodes could be developed, designed, produced, tested and evaluated. All newly developed anodes use printed circuit boards as their basis and are tested in the same vacuum chamber. Even though the detection principles of the three tested detectors differ, the extraction, processing and digitisation of the signals follow the same scheme. In addition, various algorithms for signal evaluation and position determination were developed and programmed in the course of this work.
The third chapter describes the newly developed wire-harp anode. This detector consists of many short wires stretched in parallel on frames made of printed circuit boards. However, no functional detector could be developed from this anode within the scope of this work, and it is recommended not to pursue this approach further.
The chapter on the pixel anode with strip readout presents an approach in which the electron cloud is absorbed by a pattern of conducting rhombi. A functional detector with MAMA wiring was realised; however, its active area, with a diameter of 50 mm, is too small, and a large variant of the anode is, in the form realised, not suitable as a detector.
The third new detector described is the strip delay-line anode. It consists of a rectangular pattern of pixels that are read out in one direction via a time delay. This approach is very promising: not only could single particles be detected, but in the breakup of a D2+ molecule both fragments could be measured.
The final chapter deals with further concepts that could be realised as detectors.
This thesis discusses important questions of beam dynamics in proton-lead operation of the Large Hadron Collider (LHC) at CERN in Geneva. In two blocks of several weeks in 2013 and 2016, proton-lead collisions have so far been successfully produced in the LHC and used by the LHC experiments. One reason for doubts about successful operation in the proton-lead configuration was that the two beams have to be accelerated with different revolution frequencies. There is long-range repulsion between the beams, since both share the beam chamber around the interaction points. Because of the different revolution frequencies, the positions of the beam-beam encounters shift on every revolution. This can lead to resonant excitation and to growth of the transverse beam emittance, as was observed at the Relativistic Heavy-Ion Collider (RHIC). In this thesis, simulations for the LHC, RHIC and the High-Luminosity Large Hadron Collider (HL-LHC) are performed with a new model. The results for RHIC show relative emittance growth rates of the gold beam in gold-deuteron operation from 0.1 %/s to 1.5 %/s; growth rates of this magnitude were observed experimentally at RHIC. Simulations for the LHC show no significant increase of the emittance of the lead beam for different intensities of the counter-rotating beam. The simulation results thus confirm the measured stability of the beams in the LHC while reproducing the strongly growing emittances at RHIC. Also, no significant emittance increase is predicted for the Future Circular Collider (FCC) and the HL-LHC.
Using a frequency-map analysis, this work examines whether the interaction of the lead beam with the much smaller proton beam in proton-lead operation of the LHC leads to diffusion within the lead beam. Experience at HERA at DESY in Hamburg and at the SppS at CERN has shown that the lifetime of the larger beam can rapidly decrease under certain circumstances. The simulation results show no chaotic dynamics near the beam centre of the lead beam. This result is supported by experimental observation.
A simulation code has been developed which calculates the beam evolution in the LHC by means of coupled differential equations. This study shows that the growth rates of the lead beam due to intra-beam scattering are overestimated and that particle bunches of the lead beam lose more intensity than assumed in the model. The analysis also shows that bunches colliding in a detector suffer additional losses that increase with decreasing crossing angle at the interaction point.
In this work, 2016 data from beam-loss monitors in combination with the luminosity and the loss rate of the beam intensity are used to determine the cross section of proton-lead collisions at the center-of-mass energy of 8.16 TeV. Beam-loss monitors that mainly detect beam losses that are not caused by the collision process itself are used to determine the total cross section via regression. An analysis of the data recorded in 2016 at the center-of-mass energy of 8.16 TeV resulted in a total cross section of σ=(2.32±0.01(stat.)±0.20(sys.)) b. This corresponds approximately to a hadronic cross section of σ(had)=(2.24±0.01(stat.)±0.21(sys.)) b. This value deviates only by 5.7 % from the theoretical value σ(had)=(2.12±0.01) b.
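The regression underlying such a cross-section determination can be sketched as follows. This is a toy illustration, not the thesis' actual fit: the linear loss model, the background level and the noise are all assumptions made up for the example; only the slope value of 2.32 b is taken from the text.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative model: the luminosity-driven part of the beam-intensity
# loss rate is dN/dt = -sigma * L, on top of a luminosity-independent
# background loss (e.g. from collimation). All numbers are assumed.
sigma_true = 2.32        # total p-Pb cross section in barn (value from the text)
background = 5.0e4       # luminosity-independent loss rate, particles/s (assumed)

lumi = np.linspace(1e5, 1e6, 50)            # instantaneous luminosity in 1/(b s)
loss_rate = sigma_true * lumi + background
loss_rate += rng.normal(0.0, 1e4, lumi.size)  # measurement noise (assumed)

# Linear regression: the slope of loss rate vs. luminosity estimates sigma.
slope, intercept = np.polyfit(lumi, loss_rate, 1)
print(f"fitted cross section: {slope:.2f} b")
```

The intercept absorbs the non-collision losses, which is why monitors sensitive mainly to non-luminous losses help to separate the two contributions.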
The simulation code for determining the beam evolution is also used to estimate the integrated luminosity of a future one-month run with proton-lead collisions. The study shows that the luminosity in the ATLAS and CMS experiments will increase from 15/nb per day in 2016 to 30/nb per day, a significant performance increase. This operation, however, requires the use of the TCL collimators to protect the dispersion suppressors at ATLAS and CMS from collision fragments.
This work also gives an outlook on the expected luminosity production in proton-nucleus operation using ion species lighter than lead. For example, a change from proton-lead to proton-argon collisions would increase the monthly integrated luminosity from 0.8/nb to 9.4/nb in ATLAS and CMS. This is an increase of one order of magnitude and approximately a doubling of the integrated nucleon-nucleon luminosity. A test run with proton-oxygen collisions, lasting only a few days and operated at low luminosity, may take place in 2023. The LHCf experiment (LHCb experiment) would reach the desired integrated luminosity of 1.5/nb (2/nb) within 70 h (35 h) of beam time.
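The quoted "approximately a doubling of the integrated nucleon-nucleon luminosity" can be checked with a one-line scaling: for proton-nucleus collisions the nucleon-nucleon luminosity is the ion-ion luminosity multiplied by the mass number A of the ion (a standard scaling, assumed here):

```python
# Nucleon-nucleon luminosity scaling for proton-nucleus running:
# L_NN = A * L, with A the mass number of the ion species.
A_Pb, A_Ar = 208, 40
L_pPb = 0.8   # monthly integrated p-Pb luminosity in 1/nb (from the text)
L_pAr = 9.4   # monthly integrated p-Ar luminosity in 1/nb (from the text)

L_NN_pPb = A_Pb * L_pPb     # nucleon-nucleon luminosity, 1/nb
L_NN_pAr = A_Ar * L_pAr
ratio = L_NN_pAr / L_NN_pPb
print(f"nucleon-nucleon luminosity gain: {ratio:.2f}x")  # roughly a doubling
```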
Chirality is an omnipresent phenomenon in animate nature and describes the symmetry property of an object of being distinguishable from its mirror image. Previous studies of the interaction between chiral molecules and light have focused on the regime of single- and multi-photon ionization; this work extends them to the strong-field regime. Within this thesis, experiments on single chiral molecules in strong laser fields were prepared, performed and analyzed, and all charged fragments were studied in coincidence.
The presentation of the results follows the order in which the data analysis of many-particle breakups proceeds: first, the circular dichroism in the photoions (PICD) was examined for chiral signals in integral and differential form; then the asymmetries in the electron distributions were presented; finally, the connections between the ion and electron distributions were shown.
Chapter 6 investigated the (differential) ionization and fragmentation probability of various chiral molecules. The data presented in chapter 6.1 linked, for the first time, the circular dichroism in photoion count rates (PICD) already discussed in the literature with the stronger differential PICD in the single ionization of methyloxirane. If the molecule dissociates quickly enough after ionization, the momentum vector of the charged fragment gives access to a fragmentation axis. By resolving along a molecular axis, the observed PICD is almost an order of magnitude stronger than the one integrated over all spatial directions.
With increasing complexity, chapter 6.2 investigated a four-particle fragmentation of molecules from a racemic mixture of CHBrClF. By evaluating a scalar triple product of the momentum vectors, the handedness of each individual molecule could be determined and the fully differential PICD investigated. By fixing one fragmentation axis (analogous to chapter 6.1), PICD signals stronger by a factor of four were obtained, and by resolving the complete molecular orientation the PICD signal strength was increased by a factor of about 16, into the range of a few percent. Unfortunately, a theoretical description of this process far exceeds the current state of research. It can therefore not be excluded that part of the PICD signal enhancement also originates from the dynamics of sequential multiple ionization.
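The handedness assignment via a scalar triple product of fragment momenta can be sketched as follows; the momentum values are made up purely for illustration, and the sign convention is arbitrary:

```python
import numpy as np

def handedness(p1, p2, p3):
    """Sign of the scalar triple product (p1 x p2) . p3.
    In a four-body Coulomb explosion, momentum conservation fixes the
    fourth vector, so three fragment momenta define a handedness.
    The triple product is a pseudoscalar: it flips sign under reflection."""
    return float(np.sign(np.dot(np.cross(p1, p2), p3)))

# Made-up fragment momenta (arbitrary units), for illustration only.
p1 = np.array([1.0, 0.2, -0.1])
p2 = np.array([-0.3, 0.9, 0.4])
p3 = np.array([0.1, -0.5, 0.8])

mirror = np.diag([1.0, 1.0, -1.0])   # reflection through the xy-plane
h = handedness(p1, p2, p3)
h_mirror = handedness(mirror @ p1, mirror @ p2, mirror @ p3)
print(h, h_mirror)   # opposite signs: the two enantiomers
```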
The reaction investigated in chapter 6.3 was the five-particle breakup of achiral formic acid. By measuring all ionic fragments, the internal coordinates as well as the orientation of the molecule could be determined, analogous to the previous chapter. Indeed, a chiral fragmentation of achiral formic acid was reported. Which enantiomer is observed in the fragmentation depends decisively on the molecular orientation relative to the ionizing laser pulse. This finding could lead to new approaches for laser-catalyzed enantioselective reactions. Furthermore, it could be shown that the observed handedness of the molecule depends not only on its orientation but also on the helicity of the ionizing laser pulse. This differential PICD in formic acid proved to be a sensitive probe of the molecular structure, in addition to its very large signal strength of over 20 %.
Chapter 7 presented the investigations of the 3-dimensional momentum distributions of the photoelectrons. First, the general form of the dichroism in the photoelectrons (PECD) in the strong-field regime is addressed and the prevailing symmetries of the ionization regime are worked out (chapter 7.1). With slightly increasing complexity, a clear connection between the asymmetry in the electron distribution and the fate of the remaining molecular ion could be established using the single ionization of methyloxirane (chapter 7.2). This has an important consequence for the usability of strong-field PECD as an analytical method in chemistry and pharmacy: the PECD integrated over all fragmentation channels is sensitive to the weighting of the fragments and thus also, for example, to the peak laser intensity. The data suggest that the dependence of the PECD on the fragmentation channel is due to the different selection of sub-ensembles of molecular orientations.
When elliptically polarized light is used, a number of new effects appear compared to circular polarization (chapter 7.3). First, the PECD shows a nonlinear sensitivity to the polarization state also in the strong-field regime, which additionally changes as a function of the transverse electron momentum and the fragmentation channel. The use of elliptically polarized light is therefore very well suited for chiral recognition, as has meanwhile been confirmed in the literature. Moreover, the broken rotational symmetry of elliptically polarized light leads to an electron momentum distribution that is itself chiral: the PECD varies with the angle φ in the polarization plane, and the extrema of the PECD do not coincide with the maxima of the count rates. As a new chiral observable, we introduced an enantiosensitive and forward/backward-asymmetric rotation of the count-rate maxima. As a quantity derived from the same three-dimensional electron distribution, this observable is, however, inseparably linked to the φ-dependent PECD.
Chapter 8 combined the (partial) knowledge of the molecular orientation and the PICD with the asymmetries of the electron distribution for the measurements of the fivefold ionization of formic acid (chapter 8.1), the fourfold ionization of CHBrClF (chapter 8.2) and the single ionization of methyloxirane (chapter 8.3). In the formic acid and CHBrClF data sets, the molecular orientation had a larger influence on the asymmetry in the electron distribution than the enantiomer or the helicity of the light. This link between molecular orientation and electron asymmetry transfers the asymmetries of the PICD to the electron distribution. The measurement on methyloxirane, however, puts this connection into perspective, as it appears with this strength only in some fragmentation channels. Apparently, the transfer of the asymmetry of the differential ionization probability is only one of the mechanisms leading to electron asymmetries in the strong-field regime.
To gain a better understanding of complex mechanisms in biological systems, simultaneous control over multiple processes is key. For this purpose, selective photo-uncaging has been developed. Photo-uncaging is an experimental scheme in which a molecule of interest is synthetically inactivated and then activated by light. Usually a bond is cleaved and a leaving group is set free. The molecule that inactivates the molecule of interest and releases the leaving group is called a (photo-)cage. In a selective photo-uncaging scheme, a number of leaving groups can be released independently, usually by irradiation with light of different wavelengths. This approach is, however, seriously limited in its applicability by the properties of the involved cages and irradiation schemes. A major drawback is the usually quite broad UV-Vis absorption of the cages, which makes selective activation by light difficult and severely limits the maximum number of independent cages.
Therefore, the aim of this thesis is to introduce the Vibrationally Promoted Electronic Resonance (VIPER) 2D-IR pulse sequence into an alternative selective uncaging scheme.
The VIPER 2D-IR pulse sequence is a spectroscopic tool that generates 2D-IR signals whose lifetime is independent of the vibrational relaxation lifetime. It was first used to monitor chemical exchange. The sequence consists of a narrowband infrared pump pulse, a subsequent UV-Vis pump pulse and a broadband infrared probe pulse. The UV-Vis pump pulse is off-resonant with respect to the UV-Vis absorption band. Electronic excitation becomes possible only if the infrared pump pulse modulates the UV-Vis transition of the IR-excited molecule, bringing it into resonance with the UV-Vis pump pulse. Thereby, only the molecules pre-excited by the infrared pulse can be promoted into the electronically excited state. A computational prediction of this modulation was carried out by Jan von Cosel in the Burghardt group.
The narrowband infrared pump pulse can be used to selectively excite a subensemble of molecules in a mixture into an electronically excited state even if the UV-Vis spectra of all molecules are virtually identical. For this the sub-ensemble needs to exhibit an identifiable infrared spectrum. Combined with the introduction of isotope labels, which lead to changes in the infrared absorption spectra, the larger selectivity in the infrared region can be exploited for an alternative selective uncaging approach. In VIPER uncaging the infrared pump pulse selects the species and the subsequent UV-Vis pulse provides the energy needed for electronic excitation upon which the photo cleavage can occur.
After an introduction to the basic idea of uncaging and of VIPER spectroscopy, the concept of VIPER uncaging is introduced and its limits and requirements are discussed. Some examples of possible VIPER cages are reviewed.
A coumarin molecule (7-diethylamino coumarin) that can release an azide group was chosen as a first test molecule for VIPER uncaging. Its isotopomers were characterized to determine suitable spectroscopic markers for successful uncaging and to find fitting experimental conditions. The chosen coumarin cage has a UV-Vis absorption band at approximately 380 nm with a steep flank on the long-wavelength side. The quantum yield for the azide compound is between 10 and 20 %, depending on the solvent's water content. The release was found to occur on a picosecond timescale, which is among the fastest known photo reactions, but the photo-reaction mechanism proved not to be straightforward. For the VIPER experiment on the mixture, two isotopomers were chosen with a 13C atom at different positions. In one species, a ring mode of the coumarin is changed by the 13C atom; in the other isotopomer, the carbonyl stretching mode is influenced. The change in the ring-mode region makes it possible to select one species or the other with the infrared pre-excitation. Because of experimental difficulties, only isotopomers with the same leaving group could be used. The successful selective electronic excitation of the individual isotopomers in a mixture was monitored by probing the carbonyl region.
As a second VIPER cage, para-hydroxyphenacyl (pHP) was chosen, with a thiocyanate group as the leaving group. pHP cages have their electronic transition in the UV, with maximum absorption at 290 nm. The shape of the spectrum is suitable and the quantum yield is very high, with literature values of up to 90 %. The photo reaction is also well studied and the expected byproducts are well characterized. The chosen isotopologues were characterized spectroscopically; the resulting data on the photo reaction agreed with the mechanism proposed in the literature. The mixture for the VIPER experiment consisted of two isotopologues: in one species all C atoms in the ring were labelled, in the other the C atom in the thiocyanate leaving group. Here the release of the different leaving groups, labelled and unlabelled thiocyanate, could be monitored selectively. This shows that it is possible to selectively release a molecule from a mixture of caged molecules by applying the VIPER pulse sequence.
The samples were synthesized by Matiss Reinfelds from the Heckel group, and the VIPER experiments were done together with Carsten Neumann and with support of the Bredenbeck group.
The leaving groups were chosen for their infrared absorption, which made it possible to directly monitor the successful cleavage spectroscopically. This was needed for the proof-of-concept experiment and for direct optimization of the experimental parameters, but it is not necessarily a requirement for VIPER uncaging.
Concerning the selectivity of the VIPER uncaging, the approach is at the moment mainly limited by the infrared pulse energy. The selective VIPER excitation is competing with unselective excitation directly by just the UV-Vis pulse. A more intense infrared pump pulse would increase only the selective VIPER excitation and thereby improve the contrast to the unspecific background.
To address this issue, first steps towards an alternative infrared light generation were undertaken. In this approach, the infrared light for pre-excitation is generated directly by difference-frequency mixing of the laser output, i.e. the high-energy 800 nm fundamental, with the output of a non-collinear optical parametric amplifier (NOPA). To achieve a narrowband pump pulse, the pulses are chirped before mixing. Within the scope of this thesis, a NOPA was installed and the mixing was tested with an available test crystal. While the infrared wavelength region and power were not yet in the desired range with this crystal, the feasibility of mixing a NOPA output with the fundamental could be shown.
Other possibilities to increase the contrast against the unspecific background excitation by the UV-Vis pump pulse are discussed. For most applications of selective VIPER uncaging, detection by fs laser spectroscopy will not be needed and could be replaced by other methods, e.g. chromatography. This allows the parameters of the VIPER pulse sequence to be changed in a way that reduces unspecific excitation, e.g. by lowering the UV-Vis pump energy, and results in much better contrast.
In conclusion, the experimental data in this thesis shows the VIPER pulse sequence to be applicable to selective uncaging schemes and indicates measures to arrive at the specificity necessary for uncaging applications. This thesis was focused on uncaging photo reactions with isotopomers and isotopologues, but other types of photo reactions could in principle be controlled in the same way. It should be possible to address different isomers in mixtures or different ground states of proteins selectively. The discussed experiments are a significant step towards control over photo reactions in mixtures.
Cortical circuits exhibit highly dynamic and complex neural activity. Intriguingly, cortical activity consistently exhibits two key features across observed species and brain areas. First, individual neurons tend to be co-active in spatially localized domains, forming orderly arranged, modular layouts with a typical spatial scale. Second, cortical elements are correlated in their activity over large distances, reflecting long-range network interactions distributed over several millimeters. Currently, it is unclear how these two fundamental properties emerge in early developing cortical activity.
Here, I aim to fill this gap by combining analyses of chronic imaging data and network models of developing cortical activity. Neural recordings of spontaneous and visually evoked activity in primary visual cortex of ferrets during their early cortical development were obtained using in vivo 2-photon and widefield epi-fluorescence calcium imaging. Spontaneous activity was used to probe the early state of cortical networks as its spatiotemporal organization is independent of a stimulus-imposed structure, and it is already present early in cortical development prior to reliably evoked responses. To assess the mature functional organization of distributed networks in cortex, the tuning of neural responses to stimulus features, in particular to the orientation of an edge-like stimulus, was assessed. Cortical responses to moving gratings of varying orientations form an orderly arranged layout of orientation domains extending over several millimeters.
To begin with, I showed that spontaneous activity correlations extend over several millimeters, supporting the assumption of using spontaneous activity to assess distributed networks in cortex.
Next, I asked how distributed networks in the mature visual cortex - assessed by spontaneous activity correlations - are related to its fine-scale functional organization. I found that the spatially extended and modular spontaneous correlation patterns accurately predict the fine spatial structure of visually evoked orientation domains several millimeters away. These results suggest a close relation between spontaneous correlations and visually evoked responses on a fine spatial scale and across large spatial distances.
As the principles governing the functional organization and development of distributed network interactions in the neocortex remain poorly understood, I next asked how long-range correlated activity arises early in development. I found that key features of mature spontaneous activity introduced in this work, including long-range spontaneous correlations, were present already early in cortical development, prior to the maturation of long-range horizontal connections and the predicted mature orientation preference layout. Even after silencing the feed-forward input drive by inactivating retina or thalamus, long-range correlated and modular activity robustly emerged in early cortex. These results suggest that local recurrent connections in early cortical circuits can generate structured long-range network correlations that guide the formation of visually evoked distributed functional networks.
To investigate how these large-scale cortical networks emerge prior to the maturation and elaboration of long-range horizontal connectivity, I examined a statistical network model describing an ensemble of spatially extended spontaneous activity patterns. I found a direct relationship between the dimensionality of this ensemble of activity patterns and the decay of its correlation structure. Specifically, reducing the dimensionality of the ensemble leads to an increase in the spatial range of the correlation structure.
To test whether this mechanism could generate a long-range correlation structure in cortical circuits, I studied a dynamical network model implementing a dimensionality reduction mechanism. Based on previous work demonstrating that network heterogeneity reduces the dimensionality of activity patterns, I showed that by increasing the degree of heterogeneity in the network, the dimensionality of the ensemble of activity patterns decreases and in turn their correlations extend over a greater range. A comparison to experimental data revealed a quantitative match between the network model and the observations in vivo in several of the key features of the early cortex including the spatial scale of correlations. Low dimensionality of spontaneous activity thus might provide an organizational principle explaining the observed long-range correlation structure in the early cortex.
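The qualitative relation between ensemble dimensionality and correlation range can be illustrated with a toy model. The 1-D ensemble of random Fourier-mode patterns below is an ad-hoc construction for illustration only, not the network model studied in the thesis; dimensionality is quantified by the participation ratio of the covariance eigenvalues, and correlation range by the distance at which the ensemble correlation first drops below 0.5:

```python
import numpy as np

rng = np.random.default_rng(0)

def dim_and_corr_length(n_modes, n_patterns=4000, n_x=400):
    """Sample 1-D 'activity patterns' from n_modes random Fourier modes;
    return (effective dimensionality, correlation length)."""
    x = np.linspace(0.0, 1.0, n_x, endpoint=False)
    freqs = np.arange(1, n_modes + 1)
    arg = 2.0 * np.pi * np.outer(freqs, x)               # (n_modes, n_x)
    amps = rng.standard_normal((n_patterns, n_modes))
    phases = rng.uniform(0.0, 2.0 * np.pi, (n_patterns, n_modes))
    # pattern[p, x] = sum_m amps[p, m] * cos(arg[m, x] + phases[p, m])
    patterns = ((amps * np.cos(phases)) @ np.cos(arg)
                - (amps * np.sin(phases)) @ np.sin(arg))

    # effective dimensionality: participation ratio of covariance spectrum
    lam = np.linalg.eigvalsh(patterns.T @ patterns / n_patterns)
    dim = lam.sum() ** 2 / (lam ** 2).sum()

    # ensemble correlation between position 0 and position at distance s/n_x
    ref = patterns[:, 0]
    for s in range(1, n_x // 2):
        if np.corrcoef(ref, patterns[:, s])[0, 1] < 0.5:
            return dim, s / n_x
    return dim, 0.5

dim_lo, ell_lo = dim_and_corr_length(n_modes=2)    # low-dimensional ensemble
dim_hi, ell_hi = dim_and_corr_length(n_modes=20)   # high-dimensional ensemble
print(dim_lo, ell_lo, dim_hi, ell_hi)
```

In this sketch the low-dimensional ensemble indeed shows correlations extending over a larger distance, mirroring the direction of the relationship described above.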
Finally, I asked whether a network with a biologically plausible architecture can generate modular activity. Several classical models showed that modular activity patterns can emerge via an intracortical mechanism involving lateral inhibition. However, this assumption appears to be in conflict with current experimental evidence. Moreover, these network models have so far not been tested experimentally. Here, I showed using linear stability analysis that spatially localized self-inhibition relaxes the constraints on the connectivity structure in a network model, such that biologically more plausible network motifs, with shorter-ranging inhibition than excitation, can robustly generate modular activity.
Importantly, I also provided several model predictions to make the class of network models experimentally testable in view of recent technological advancements in imaging and manipulation of cortical circuits. A critical prediction of the model is the decrease in spacing of active domains when the total amount of inhibition increases. These results provide a novel mechanism of how cortical circuits with short-range inhibition can form modular activity.
Taken together, this thesis provides evidence that the two described fundamental features of neural activity are already present in the early cortex and shows that activity with those features can be generated in network models with an architecture consistent with the early cortex using basic principles.
In this work a nonlinear evolution of pure states of a finite dimensional quantum system is introduced, in particular a Riccati evolution equation.
It is shown how this class of dynamics is actually a Hamiltonian dynamics in the complex projective space.
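As an illustration of the kind of dynamics meant here (a standard computation for the simplest case, not a result specific to this thesis): for a two-level system with Hamiltonian matrix elements $H_{jk}$, writing the state in the projective coordinate $z = \psi_2/\psi_1$ turns the linear Schrödinger equation into a Riccati equation,

```latex
i\hbar\,\dot{\psi}_k = \sum_{j} H_{kj}\,\psi_j
\quad\Longrightarrow\quad
i\hbar\,\dot{z} = H_{21} + \bigl(H_{22} - H_{11}\bigr)\,z - H_{12}\,z^{2},
\qquad z = \frac{\psi_2}{\psi_1}.
```

The quadratic term makes the evolution on the complex projective space manifestly nonlinear even though the underlying Hilbert-space dynamics is linear.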
In this projective space it is shown that there is a nonlinear superposition rule, consistent with its linear counterpart in the Hilbert space. As an example, the developed nonlinear formalism is applied to the semiclassical Jaynes–Cummings model.
Later, it is shown that there is an inherent nonlinear evolution in the dynamics of the so-called generalized coherent states.
To show this, use is made of the fact that in quantum mechanics a ''classical'' manifold can be immersed into the Hilbert space, so that the time dependence of the wave function may be parametrized through the variation of parameters on the classical manifold.
The immersion makes it possible to invoke the so-called principle of analogy, i.e. to take procedures and structures available in the classical setting and employ them in the quantum setting.
Finally, contact Hamiltonian mechanics, an extension of symplectic Hamiltonian mechanics, is introduced, and it is shown to be a natural candidate for a geometric description of both non-dissipative and dissipative systems.
The last decades have brought tremendous progress in understanding the phase structure of the strongly interacting matter. This has been driven by studying heavy-ion collisions on the experimental side and Lattice QCD, functional approaches to QCD, perturbation theory and effective theories on the theoretical side. Of particular interest is the transition from hadrons to partonic degrees of freedom which is expected to occur at high temperatures or high baryon densities. These phases play an important role in the early universe and the core of neutron stars. Nowadays, the existence of a deconfined phase, i.e. Quark Gluon Plasma (QGP) and its phase transition at vanishing and small net-baryon densities, are well established. However, the situation at larger densities is less clear.
Complementary to the studies of matter at high temperatures and low net-baryon densities performed at RHIC and the LHC, the proposed Compressed Baryonic Matter (CBM) experiment at the future FAIR facility aims to explore the QCD phase diagram at very high net-baryon densities and moderate temperatures. The CBM research program includes the search for the deconfinement phase transition, the study of chiral symmetry restoration in super-dense baryonic matter, the search for the critical endpoint, and the study of the nuclear equation of state at high densities. While other experiments (STAR-BES at BNL, BM@N at NICA) are suited to measuring bulk observables, CBM is explicitly designed to access rare observables such as multi-strange hadrons, dileptons, hypernuclei and charmonium. A key feature of CBM is therefore its very high interaction rate, exceeding those of contemporary and proposed nuclear collision experiments by several orders of magnitude. However, some of the rare probes have a complex signature, hidden in a background of several hundred charged tracks. This forbids a conventional, hardware-triggered readout; instead, the experiment combines self-triggered front-end electronics, fast and free-streaming data transport, online event reconstruction and online event selection.
The central detector for tracking and momentum determination of charged particles in the CBM experiment is the Silicon Tracking System (STS). It is designed to measure up to 700 charged particles in nucleus-nucleus collisions at interaction rates between 0.1 and 10 MHz, to achieve a momentum resolution better than 2% in a 1 Tm dipole magnetic field, and to be capable of identifying complex particle-decay topologies, e.g. those with strangeness content. The STS comprises 8 tracking stations equipped with double-sided silicon microstrip sensors. Two million channels are read out with self-triggering electronics, matching the data-streaming and online event-analysis concept applied throughout the experiment. The detector's functional building block consists of a silicon sensor, aluminum-kapton microcables and two front-end electronics boards integrated in a module. The custom-designed ASIC (STS-XYTER) implements the analog front-end, the digitizer and the generation of individual hit data for each signal.
The design of the front-end chip requires finding an optimal solution for time and input-charge measurements under tight constraints: small area (58 μm channel pitch), low noise levels (below 1500 ENC(e−)), low power consumption (610 mW/channel), a radiation-hard architecture and speed requirements. As the chip forms the first processing stage of the full readout and data-acquisition chain, its characterization and its integration with the detector components are crucial tasks. In this work, various methods and tools are established for testing and qualifying the ASIC analog front-end. A procedure for amplitude and timing calibration is developed using different functionalities of the chip and optimized for our prototype system to achieve the best accuracy in the shortest amount of time. The results were verified using a gamma source and an external pulse generator, showing discrepancies below 5%.
Among the multiple operational requirements of the ASIC, the noise performance is of essential importance. The noise of the chip is characterized as a function of a large number of parameters, such as the low-voltage power regulators, input capacitance, shaping time, temperature and the bonds' protective glue (glob-top). These studies made it possible to optimize the ASIC configuration settings, to identify possible malfunctions in the low-voltage powering scheme and to select glob-top materials suitable for the module assembly. Moreover, important differences were found between odd and even channels, whose main cause was traced to the bias scheme of the amplifiers of the two groups of channels. This effect has been corrected in the new version (v2.1) of the ASIC.
Despite the STS front-end electronics being located outside of the physics acceptance, they will be exposed to high fluxes of charged particles. Considering the SIS100 possible running scenario, the lifetime dose at the location of the electronics is expected not to exceed 800 krad. Consequently, the STS-XYTERv2 ASIC implements a radiation hard design based on dual-interlocked cells (DICE), and triple modular redundancy (TMR).
Multiple dedicated beam campaigns were carried out to evaluate the ASIC design in terms of immunity to single-event-upset (SEU) errors and overall performance after a lifetime dose. The DICE-cell SEU cross section was measured in a high-intensity proton beam. The results show a significant improvement of the SEU immunity in the STS-XYTERv2 compared to its predecessor and allow the upset rate in the CBM running scenario to be estimated, resulting in less than one SEU/ASIC/day.
The studies on the total ionizing dose (TID) show that the overall noise levels for the ASIC, at the end of the experiment lifetime, are expected to increase by approximately 40 – 60%. Moreover, they demonstrated that short periods of annealing at room temperature can favorably influence the noise performance of the chip.
The assembly and testing of the STS modules, a complex process with multiple stages and a long learning curve, is illustrated in different parts of this work. The first prototype modules were built with the front-end board type B (FEB-B), capable of reading out 128 channels on the p and n side, respectively. The studies were conducted with a relativistic proton beam of 1.7 GeV/c momentum at the COSY accelerator facility, Research Center Juelich, in March 2018. The campaign brought valuable insights for the development of an effective grounding and powering scheme for reading out the detectors. The signal-to-noise ratio was measured for one of the prototype modules, resulting in values larger than 15 for both polarities. A deeper analysis of the collected data allowed the identification of a logic error in the ASIC that affected the readout rate and the quality of the data; this issue was corrected in the new version of the chip.
A precursor of the STS detector, named mini-STS (mSTS), has been built within the mCBM project carried out in FAIR Phase 0. The mSTS was built from 4 fully assembled detector modules. To ensure proper operation of the ASICs used in the module assembly, a rigorous quality-assurance procedure had to be developed. A dedicated setup was built, based on a custom-designed pogo-pin station, and a total of 339 chips were tested; more than 90% were found to be operational and of good quality. In the mCBM beam campaign of March 2019, four detector modules were successfully operated in a close-to-final readout chain and valuable data were collected. The mSTS detector was exposed to the products of Ag+Au collisions at energies above 1.58 AGeV and overall interaction rates up to 10^6, which resembles the real conditions of the CBM experiment.
In the course of this work, significant progress in the development of the STS detector modules was achieved. Techniques for the characterization of the front-end electronics and of the complete detector system were developed. They will be applied for the quality assurance (QA) of the components during series production.
As its fundamental function, the brain processes and transmits information using populations of interconnected nerve cells, or neurons. The communication between these neurons occurs via discrete electric impulses called spikes. A core challenge in neuroscience has been to quantify how much information about relevant stimuli or signals a neuron transports in its spike sequences, or spike trains. The recently introduced correlation method makes it possible to determine this so-called mutual information in terms of a neuron's temporal spike correlations under certain stationarity assumptions. Based on the correlation method, I address several open questions regarding neural information encoding in the cortex.
In the first part (chapter 2), I investigate the role of temporal spike correlations for neural information transmission. Temporal correlations in neuronal spike trains diminish independence in the information that is transmitted by the different spikes and hence introduce redundancy to stimulus encoding. However, exact methods to describe how such spike correlations impact information transmission quantitatively have been lacking. Here, I provide a general measure for the information carried by spike trains of neurons with correlated rate modulations only, neglecting other spike correlations, and use it to investigate the effect of rate correlations on encoding redundancy. I derive it analytically by calculating the mutual information between a time correlated, rate-modulating signal and the resulting spikes of Poisson neurons. Whereas this information is determined by spike autocorrelations only, the redundancy in information encoding due to rate correlations depends on both the distribution and the autocorrelation of the rate histogram. I further demonstrate that, at very small signal strengths, the information carried by rate correlated spikes becomes identical to that of independent spikes, in effect measuring the rate modulation depth. In contrast, a vanishing signal correlation time maximizes information transmission but does not generally yield the information of independent spikes.
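The role of the signal correlation time can be made concrete with a small simulation. This is only an illustrative sketch with made-up parameters: an Ornstein-Uhlenbeck process stands in for the temporally correlated, rate-modulating signal, and a Poisson neuron emits spikes whose counts then inherit the signal's correlation structure (the source of the encoding redundancy discussed above):

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical illustration parameters
n_bins, dt = 100_000, 5e-3         # number of bins, bin width (s)
r0, sigma, tau = 20.0, 15.0, 0.1   # base rate (Hz), modulation depth (Hz), corr. time (s)

# temporally correlated, rate-modulating signal (Ornstein-Uhlenbeck, unit variance)
a, b = np.exp(-dt / tau), np.sqrt(1 - np.exp(-2 * dt / tau))
s = np.empty(n_bins)
s[0] = rng.standard_normal()
for i in range(1, n_bins):
    s[i] = a * s[i - 1] + b * rng.standard_normal()

rate = np.clip(r0 + sigma * s, 0.0, None)   # inhomogeneous Poisson rate
counts = rng.poisson(rate * dt)             # spike counts per bin

def autocorr(x, lag):
    """Normalized autocorrelation of a series at a given lag (in bins)."""
    x = x - x.mean()
    return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

# spike-count correlations are visible at lags well below tau
# and have decayed away at lags of several tau
c_short = autocorr(counts, 1)     # lag 5 ms  << tau
c_long = autocorr(counts, 100)    # lag 0.5 s == 5 tau
```

With these (arbitrary) numbers, `c_short` is clearly positive while `c_long` is consistent with zero, illustrating how rate correlations make successive spikes redundant rather than independent carriers of information.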
In the second part (chapter 3), I analyze the information transmission capabilities of two particular schemes of encoding stimuli in the synaptic inputs using integrate-and-fire neuron models. Specifically, I calculate the exact information contained in spike trains about signals which modulate either the mean or the variance of the somatic currents in neurons, as is observed experimentally. I show that the information content about mean modulating signals is generally substantially larger than about variance modulating signals for biological parameters. This result provides evidence, by means of exact calculations of the mutual information, against the potential benefit of variance encoding that had been suggested previously.
Another analysis reveals that higher information transmission is generally associated with a larger proportion of nonlinear signal encoding. Moreover, I show that a combination of signal-dependent mean and variance modulations of the input current can synergistically benefit information transmission through a nonlinear coupling of both channels. On a more general level, I identify what was previously considered an upper bound as the exact, full mutual information. Furthermore, by analyzing the statistics of the spike train Fourier coefficients, I identify the means of the Fourier coefficients as information-carrying features.
Overall, this work contributes answers to central questions of theoretical neuroscience concerning the neural code and neural information transmission. It sheds light on the role of signal-induced temporal correlations for neural coding by providing insight into how signal features shape redundancy and by establishing mathematical links between existing methods and providing new insights into the spike train statistics in stationary situations. Moreover, I determine what fraction of the mutual information is linearly decodable for two specific signal encoding schemes.
The Big Bang about 13.8 billion years ago marks the origin of the universe. All energy and matter was concentrated in a single point and has been expanding continuously ever since. Fractions of a second after the Big Bang, the temperature and density of this matter were extremely high, and the newly created elementary particles, in particular quarks and gluons, passed through a state known as the quark-gluon plasma (QGP), in which the strong interaction dominates. Within this plasma, quarks and gluons, which are otherwise bound inside hadrons, can move freely. A direct observation of the primordial QGP is not possible with today's means. However, it is possible to study the dynamics and kinematics inside an artificially created QGP and thus to draw conclusions about the processes during the Big Bang.
To create artificial QGPs under controlled conditions, ultrarelativistic heavy ions are brought to collision. The most powerful heavy-ion accelerator ever built, the LHC, is located at the CERN research centre near Geneva. The ALICE experiment, one of the four large experiments at the LHC, was built specifically to study the QGP in detail. Fully ionized lead nuclei are collided at nearly the speed of light. The deposited energy raises the temperature of the quarks and gluons inside the colliding nucleons until a critical temperature is exceeded and a phase transition into the QGP occurs. In the course of the collision, the medium cools down and drops below the critical temperature. Hadrons are then formed from the formerly free quarks. These hadrons, or their decay products, can subsequently reach the detectors of the experiment, where they are measured.
Several possible observables of the QGP are measurable with the ALICE experiment. The observables studied in detail in this work are the invariant mass and the pair transverse momentum of dielectrons. A dielectron consists of an electron and a positron that are correlated with each other. Dielectrons are ideal probes of the QGP. They are produced by various processes during all phases of the collision, for example in the initial hard scatterings of the colliding nucleons or in the electromagnetic decays of hadrons such as the π0 and the J/ψ. In addition, the QGP radiates dielectrons depending on its temperature, which in principle allows a direct measurement of the QGP temperature. A further advantage of dielectron measurements over hadron measurements is that electrons and positrons carry no colour charge and therefore do not take part in the strong interaction that dominates inside the QGP; they can thus deliver unaltered information about its dynamics.
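The two pair observables named here follow directly from the single-track kinematics. A minimal, self-contained sketch (all kinematic values below are made up purely for illustration) of how the invariant mass and the pair transverse momentum of a dielectron are computed from the electron and positron four-momenta:

```python
import numpy as np

M_E = 0.000511  # electron mass in GeV/c^2

def four_momentum(pt, eta, phi, m=M_E):
    """Build (E, px, py, pz) in GeV from transverse momentum,
    pseudorapidity and azimuth (natural units, c = 1)."""
    px, py = pt * np.cos(phi), pt * np.sin(phi)
    pz = pt * np.sinh(eta)
    E = np.sqrt(m**2 + px**2 + py**2 + pz**2)
    return np.array([E, px, py, pz])

def pair_mass_and_pt(p1, p2):
    """Invariant mass m_ee and pair transverse momentum pT,ee."""
    E, px, py, pz = p1 + p2
    m_ee = np.sqrt(max(E**2 - px**2 - py**2 - pz**2, 0.0))
    pt_ee = np.hypot(px, py)
    return m_ee, pt_ee

# illustrative values only: a back-to-back electron-positron pair
electron = four_momentum(pt=1.0, eta=0.1, phi=0.0)
positron = four_momentum(pt=1.0, eta=-0.1, phi=np.pi)
m_ee, pt_ee = pair_mass_and_pt(electron, positron)
```

For this back-to-back configuration the pair transverse momentum vanishes and the invariant mass is close to the scalar sum of the two energies, about 2 GeV/c².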
In this thesis, dielectron spectra are measured as a function of invariant mass and pair transverse momentum in lead-lead collisions at a centre-of-mass energy of √sNN = 5.02 TeV. For the first time in heavy-ion collisions at one of the large LHC experiments, the minimum transverse momentum of the measured electrons and positrons could be lowered to pT,e > 0.2 GeV/c. Compared with the published measurement with pT,e > 0.4 GeV/c, this makes so-called soft processes accessible as well, but also increases the complexity of the measurement through a massively larger background. In addition, the measurement is performed as a function of centrality. Centrality is a measure of the distance between the two lead nuclei at the moment of the collision: the more central a collision, the larger the deposited energy, and the larger and hotter the created QGP and the resulting effects.
The measured dielectron distributions are compared with the expected contributions from hadronic decays. The measurement shows that the contribution from semileptonic decays of charm quarks measured in vacuum, scaled up with the number of binary nucleon-nucleon collisions in lead-lead events, does not describe the dielectron spectrum. Modifying this contribution according to the independently measured nuclear modification factor for single electrons from charm and beauty quarks improves the description of the dielectron spectrum. In addition, the contribution of virtual direct photons was estimated; the measured values are comparable to previous measurements at a lower centre-of-mass energy. Furthermore, in peripheral collisions it is possible to measure a contribution from a source that emits dielectrons at low pair transverse momentum pT,ee < 0.15 GeV/c.
In this thesis, we presented the theoretical description of the magnetic properties of various frustrated spin systems. Especially in the search for exotic states, such as quantum spin liquids, magnetically frustrated systems have been the subject of intense research over the last four decades. Relating experimental observations in real materials to theoretical models that capture those exotic magnetic phenomena has been one of the great challenges in the field of magnetism in condensed matter.
In order to build such a bridge between experimental observations and theoretical models, we followed two complementary strategies in this thesis. One strategy was based on first-principles methods that enable the theoretical prediction of electronic properties of real materials without further experimental input than the crystal structure. Based on these predictions, low-energy models that describe magnetic interactions can be extracted and, through further theoretical modelling, compared to experimental observations. The second strategy was to establish low-energy models through comparison of experimental data, such as inelastic neutron scattering intensities, with calculated predictions based on a variety of plausible magnetic models guided by microscopic insights. Both approaches make it possible to relate theoretical magnetic models to real materials and may provide guidance for the design of new frustrated materials or the investigation of promising models related to exotic magnetic states.
The diffusive behavior of macromolecules in solution is a key factor in the kinetics of macromolecular binding and assembly, and in the theoretical description of many experiments. Experiments on high-density protein solutions have found that the slowdown of the diffusion dynamics is larger than expected from colloidal theory for non-interacting hard spheres. It has also been shown that the rotational diffusion anisotropy in high-density protein solutions is larger than in dilute ones. A high-density protein solution is a complex fluid that differs from the neat-fluid assumption used in hydrodynamic theory. It is therefore important to have methods to accurately calculate the translational and rotational diffusion tensors from simulations, as well as simulation algorithms to explore high-density solutions.
Simulations provide a powerful tool to study diffusion in complex fluids. They can be used to study the macroscopic and microscopic effects of complex fluids on diffusive behavior, and a great deal of work has already been done to simulate diffusion accurately and to determine diffusion coefficients from simulations.
The translational diffusion of molecules in simple and complex liquids can be determined with high accuracy from simulations. This is not yet the case for rotational diffusion: existing algorithms to calculate rotational diffusion coefficients from simulations make assumptions about the shape of the protein or only work at short times. For the simulation of the diffusive behavior of macromolecules, two options exist today: an all-atom integrator with explicit solvent molecules, or coarse-grained (CG) simulations with an implicit solvent. CG simulations of dynamic behavior with an implicit solvent are also called Brownian dynamics (BD) simulations. For CG simulations, the Ermak-McCammon algorithm is often used to solve the underlying Langevin equation. The algorithm is an extension of the Euler-Maruyama integrator that includes translation and rotation in three dimensions. It only reproduces the correct equilibrium probability for short time-steps, with an error that depends linearly on the time-step. It has been shown that Monte Carlo based algorithms can reproduce BD for translational dynamics when appropriately parametrized. The advantage of Monte Carlo based algorithms is that they reproduce the correct equilibrium distribution independent of the chosen time-step, which in turn allows larger time-steps in simulations. The aim of this thesis is to develop novel methods to accurately determine the rotational diffusion coefficient from simulations and to extend existing Monte Carlo algorithms to include rotational dynamics.
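A minimal sketch of the translational part of such a BD integrator, i.e. the Euler-Maruyama scheme for the overdamped Langevin equation. This omits the rotational update and hydrodynamic interactions of the full Ermak-McCammon algorithm, and all parameters are illustrative; the free-diffusion check at the end uses the standard 3D relation MSD = 6Dt:

```python
import numpy as np

def bd_step(x, force, D, dt, kT, rng):
    """One Euler-Maruyama step of the overdamped Langevin (BD) equation
    for an isotropic particle: dx = (D/kT) F dt + sqrt(2 D dt) dW."""
    drift = (D / kT) * force(x) * dt
    noise = np.sqrt(2.0 * D * dt) * rng.standard_normal(x.shape)
    return x + drift + noise

# sanity check with free diffusion (zero force): the mean-squared
# displacement of an ensemble grows as 6 * D * t in three dimensions
rng = np.random.default_rng(1)
D, dt, kT = 1.0, 1e-3, 1.0           # arbitrary reduced units
n_particles, n_steps = 2000, 500
x = np.zeros((n_particles, 3))
zero_force = lambda x: np.zeros_like(x)
for _ in range(n_steps):
    x = bd_step(x, zero_force, D, dt, kT, rng)
msd = (x**2).sum(axis=1).mean()      # expected: 6 * D * n_steps * dt = 3.0
```

The time-step dependence mentioned in the text enters as soon as `force` is non-zero: the drift term is only first-order accurate in `dt`, which is exactly the limitation the Monte Carlo based alternatives avoid.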
The first project addresses the question of how to accurately determine the rotational diffusion coefficients from simulations. We develop a quaternion based method to calculate the rotational diffusion tensor from simulations and a theory for the effects of periodic boundary conditions (PBC) on the rotational diffusion coefficient in simulations.
Our method for calculating rotational diffusion coefficients is based on the quaternion covariances derived by Favro for a freely rotating rigid molecule. The covariances as formulated by Favro are only valid in the principal coordinate system (PCS) of the rotational diffusion tensor. They can be generalized to an arbitrary reference coordinate system (RCS), i.e., a simulation, given the principal axes of the rotational diffusion tensor in the RCS. We show that no prior knowledge of the diffusion tensor and its principal axes is required to calculate the generalized covariances from simulations using common root-mean-square deviation (RMSD) procedures. We develop two methods to fit the covariances calculated from simulations to our generalized equations and thus extract the rotational diffusion tensor. In the first method, we minimize the sum of the squared deviations between model and simulation data; for this six-dimensional optimization we use a simulated annealing algorithm. Alternatively, the rotational diffusion tensor can be determined from an eigenvalue decomposition of the integrated covariance. To minimize the effects of sampling noise in the integration, we first apply a Laplace transformation to smooth the covariances at large times. For ideal sampling, the resulting rotational diffusion coefficient should be independent of the value of the Laplace variable; in practice, however, the best results are achieved using a value close to the inverse autocorrelation time of the rotational motion.
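The Laplace-smoothing idea can be illustrated in the simplest isotropic case, which is a strong simplification of the tensor fit described above: an orientational correlation function decays as C(t) = exp(-6Dt) (the rank-2 case for isotropic rotation), so its Laplace transform is 1/(s + 6D) and D can be solved for at any value of the Laplace variable s. The sketch uses synthetic noisy data with hypothetical parameters; the transform averages over the noisy tail instead of differentiating it:

```python
import numpy as np

rng = np.random.default_rng(3)

D_true = 2.0                 # hypothetical rotational diffusion coefficient
dt = 2e-4
t = np.arange(0.0, 2.0, dt)
# synthetic noisy correlation data, C(t) = exp(-6 D t) + noise
C = np.exp(-6.0 * D_true * t) + 0.01 * rng.standard_normal(t.size)

def D_from_laplace(C, t, s, dt):
    """Numerical Laplace transform of C, then invert 1/(s + 6D) for D."""
    C_tilde = np.sum(np.exp(-s * t) * C) * dt
    return (1.0 / C_tilde - s) / 6.0

# ideally independent of s; in practice best near the inverse
# autocorrelation time of the motion (here 6*D_true = 12)
estimates = {s: D_from_laplace(C, t, s, dt) for s in (6.0, 12.0, 24.0)}
```

All three estimates recover D_true to within a few percent despite the noise, and their spread gives a rough indication of how far the sampling is from ideal.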
...
In this work we provided additional insights into our understanding of bulk QCD matter through the study of the transport coefficients which govern the non-equilibrium microscopic processes of statistical ensembles. Specifically, we focused on the low-energy regime corresponding to the hadron gas, as the properties of this region of the phase diagram are still relatively unknown, and existing calculations of the transport coefficients are either scarce, contradictory, or somewhat limited in scope; this thesis' main goal was thus to shed some light on this by providing new independent calculations of these quantities.
We subsequently presented two formalisms which can be used to calculate transport coefficients. The first one (which was also the main tool we used in the following chapters to produce our results) relies on the development of so-called Green-Kubo formulas, which relate non-equilibrium dissipative fluctuations to transport coefficients; notably, the off-diagonal components of the energy-momentum tensor are shown to be related to the shear viscosity, its diagonal components to the bulk viscosity, and fluctuations in the electric current to the electric conductivity. We additionally introduced two new conductivities, namely the baryon-electric and strange-electric conductivities, which we dubbed, together with the already known electric one, the "cross-conductivity"; it encodes information about how electric fluctuations are correlated to changes in electric, baryonic or strange currents, or vice versa. The second way of calculating transport coefficients which we discussed consists in linearizing the collision term of the Boltzmann equation through the Chapman-Enskog formalism. While in principle providing direct semi-analytical results for the transport coefficients, this approach is complicated to implement when more than a few species are considered, and as such was mostly used as a tool to calibrate our Green-Kubo calculations.
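The Green-Kubo recipe described above can be sketched on a synthetic fluctuation series with a known correlation time; this is not SMASH output, and the thermodynamic prefactor (e.g. V/T for the shear viscosity from the off-diagonal energy-momentum-tensor components) is set to 1 for simplicity. For an exponentially correlated signal with unit variance and correlation time tau, the integral of the autocorrelation, and hence the transport coefficient here, is known to be tau:

```python
import numpy as np

def green_kubo(series, dt, prefactor, n_cut):
    """Green-Kubo estimate: prefactor times the time integral of the
    autocorrelation of an equilibrium fluctuation time series."""
    x = series - series.mean()
    n = len(x)
    # unbiased autocovariance via FFT (Wiener-Khinchin, zero-padded)
    f = np.fft.rfft(x, 2 * n)
    acf = np.fft.irfft(f * np.conj(f))[:n] / np.arange(n, 0, -1)
    # integrate only up to a cutoff where the correlation has decayed,
    # to avoid accumulating noise from the tail
    return prefactor * dt * acf[:n_cut].sum()

# synthetic "T^xy-like" fluctuations with correlation time tau:
# C(t) = exp(-t/tau), so the Green-Kubo integral should be ~tau
rng = np.random.default_rng(2)
dt, tau, n = 0.01, 0.5, 200_000
a = np.exp(-dt / tau)
xy = np.empty(n)
xy[0] = rng.standard_normal()
for i in range(1, n):
    xy[i] = a * xy[i - 1] + np.sqrt(1 - a**2) * rng.standard_normal()

eta = green_kubo(xy, dt, prefactor=1.0, n_cut=500)  # cutoff at 10*tau
```

The cutoff choice mirrors the practical difficulty of Green-Kubo calculations: integrating too far adds noise, integrating too little truncates the correlation function.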
The hadron gas model that we used for all calculations, namely the transport approach SMASH, was then presented. The main features of the model were explained, such as the collision criterion, the considered degrees of freedom, and the specific way in which they interact microscopically with each other. It was verified that SMASH reproduces analytical results of the Boltzmann equation in an expanding-universe scenario, thus showing the equivalence of this transport approach and the associated kinetic-theory results. Special care was taken to detail the ways in which a state of thermal and chemical equilibrium (which is necessary for Green-Kubo relations to be valid) can be reached and described using SMASH.
...
The goal of the present work was to optimize the crystal growth of iron-based superconductors. The first part focused on the growth of the 1111 compound under high-pressure/high-temperature (HP/HT) conditions, as well as on a systematic investigation of the various factors influencing the growth of this family under ambient-pressure conditions.
With the chosen parameters, the HP/HT experiments did not stabilize the desired target phase, neither with nor without the use of a flux. Instead, phase separation occurred: inside the BN crucible, a frequently sphere-shaped structure consisting of an Fe-As phase always formed. This holds for NdFeAsO as well as for LaFeAsO1-xFx. When salt was used as a flux, a Cl-containing phase frequently formed in addition to this Fe-As phase. It also turned out that boron diffusion occurred during the experiment, such that rare-earth oxoborates could be detected. An experiment under ambient-pressure conditions showed that this is not a problem of the high-pressure synthesis, but a fundamental problem of using BN together with the rare earths.
After it had been shown that a systematic investigation and optimization of the growth parameters of the 1111 compounds under HP/HT conditions is extremely difficult, the further focus was placed on growth under ambient-pressure conditions. First it was shown that the use of quartz ampoules at temperatures of up to 1200 °C does not lead to additional oxygen diffusion. This made it possible to optimize a suitable temperature-time profile without additional welding work or high costs. The profile obtained in this way was then used for all further experiments. On this basis, the influence of the amount of flux on the stabilization of the phase, and thus on crystal growth, was investigated; a molar material-to-flux ratio of 1:7 yielded the best results. The next optimization step addressed the question of a suitable oxygen donor, concentrating on donors from the group of iron oxides. For the chosen temperature-time profile, the compounds FeO and Fe3O4 gave the best results, and in these experiments crystals with edge lengths of up to 800 μm were grown. However, comparative experiments with a different temperature-time profile showed that Fe2O3 gives the best results in those cases. This makes clear that there is as yet no complete control over the growth of the 1111 compound: changing one growth parameter means that all other parameters have to be re-examined. A well-founded, systematic investigation of the growth parameters is therefore necessary.
After these fundamental questions had been answered for the undoped compound NdFeAsO, it was investigated which combination of oxygen and fluorine donors is optimal for crystal growth and fluorine incorporation at the given temperature-time profile. The results showed that in this case Fe3O4 and FeF2 led to the best results. The crystals grown in this way had edge lengths of up to 800 μm, and electrical resistance measurements showed a maximum Tc ≈ 53 K with a residual resistance ratio (RRR) in the magnetic regime of more than 10. In terms of quality, the grown crystals thus differ by a factor of ~3 from the single crystals previously known from the literature.
By determining the real fluorine content of the samples by WDX, combined with electrical resistance measurements, a preliminary phase diagram was constructed.
Magnetic measurements under ambient and high pressure made it possible to measure the anisotropy between the ab plane and the c direction, as well as the behavior of the electrical resistance as a function of pressure.
It was found that above a pressure of about 22.9 GPa, superconductivity in these crystals vanishes and the crystal becomes normal-conducting again. With further increasing pressure, the absolute resistance values rise again, which points to a possible ferromagnetic order.
The second part of this work focused on a compound from the 122 family of the pnictides: SrFe2As2. First it was investigated which of three chosen crucible materials, BN, Al2O3, or glassy carbon, is best suited for growing this phase. The desired target phase could be stabilized in all experiments, but with glassy carbon, carbon diffused from the crucible into the sample, so that C-containing phases were detectable; likewise, material diffused into the crucible. These problems also occurred with Al2O3: an X-ray powder diffraction pattern revealed an Al-containing compound in the sample. A further disadvantage of this material is the wetting of the crucible by the melt. Of the three materials, BN proved to be the most suitable crucible material: no wetting or diffusion occurs, and the fraction of foreign phases in this sample is very small.
With this knowledge, a quasi-binary phase diagram of the system SrFe2As2-FeAs was then constructed, with the intermetallic compound FeAs acting as flux. An important question in this context is whether the system solidifies congruently. This question cannot be answered unambiguously from the available DTA curves: the system showed no additional melting processes on heating, but arsenic appears to evaporate from the melt. The composition of the melt therefore shifts, and additional solidification processes appear on cooling. The melting temperature was determined as TM = 1320 °C. With increasing flux fraction, this temperature shifted to lower values below 1200 °C, which again makes growth in quartz ampoules possible.
The results of this work provide a solid basis for further optimization. For example, regarding the most suitable oxygen donor, the rare-earth oxides have not yet been considered. Whether the use of another salt, for example an iodide, yields better growth results can also be investigated further.
Now that the melting point of SrFe2As2 has been determined and a eutectic has been found in the quasi-binary phase diagram, the further optimization steps for the crystal growth of this system can begin. These include the development of a temperature-time profile and, as a next step, the growth of doped compounds.
Charge states and energy loss of heavy ions after passing an inductively coupled plasma target
(2019)
In fields such as accelerator physics, warm dense matter, high-energy-density physics, and inertial confinement fusion, the interaction of heavy-ion beams with plasma plays an important role, and abundant investigations have been and are being carried out. Taking advantage of a good level of understanding of the interaction between a swift heavy-ion beam and a hydrogen gas-discharge plasma, an engineering application of a spherical theta-pinch device as a plasma stripper for FAIR (Facility for Antiproton and Ion Research) and a scientific application of a swift heavy-ion beam as a novel plasma diagnostic tool are proposed and investigated.
The spherical theta-pinch device was manufactured, improved, and comprehensively tested for its application as a plasma stripper. The device mainly consists of an evacuated glass vessel that can be filled with gas (for example, hydrogen) and an LRC circuit comprising a capacitor bank and a set of coils. When the device is discharged at a given initial hydrogen pressure in the glass vessel and a given operating voltage of the capacitor bank, a current oscillates in the LRC circuit. The oscillating current in the set of coils induces a corresponding alternating magnetic field inside the glass vessel, which ignites and maintains a hydrogen plasma.
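Electrically, the discharge described above behaves like an underdamped series-LRC circuit. A small sketch of the resulting oscillating coil current; the component values below are hypothetical (only the 14 kV charging voltage appears in the text), chosen merely to show the characteristic damped ringing:

```python
import numpy as np

# hypothetical circuit parameters, not the actual device values
C = 50e-6    # capacitor bank (F)
L = 1e-6     # coil inductance (H)
R = 0.05     # total circuit resistance (Ohm)
V0 = 14e3    # charging voltage (V), value taken from the text

# underdamped series-RLC discharge:
# I(t) = (V0 / (w L)) * exp(-t / (2L/R)) * sin(w t)
w0 = 1.0 / np.sqrt(L * C)            # undamped resonance (rad/s)
gamma = R / (2.0 * L)                # damping rate (1/s)
w = np.sqrt(w0**2 - gamma**2)        # actual ringing frequency (rad/s)

t = np.linspace(0.0, 200e-6, 2000)
I = (V0 / (w * L)) * np.exp(-gamma * t) * np.sin(w * t)

f_ring = w / (2 * np.pi)             # oscillation frequency of the coil current
I_peak = np.abs(I).max()             # first current maximum, reduced by damping
```

With these made-up values the current rings at a few tens of kHz with peak amplitudes in the tens of kA, i.e. the regime in which such a coil can drive a strong alternating field inside the vessel; the real device parameters would of course set the actual numbers.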
Based on the built setup of circuit and plasma diagnostics, measurements of the circuit current, the plasma light emission, the plasma shape, and the hydrogen Balmer series are carried out. The recorded signals of the circuit current and the plasma light emission of many consecutive discharges overlap perfectly, which indicates a very good reproducibility of the LRC-circuit parameters during the discharge and of the generated plasma. From the measured circuit current, a real energy-transfer efficiency is calculated with our proposed new model, which shows its overall tendency as a function of the hydrogen pressure and the operating voltage, including a maximum value of 25% occurring at an initial hydrogen pressure of around 25 Pa and a maximum operating voltage of 14 kV. So, the discharge at an initial hydrogen pressure of 20 Pa and an operation voltage of 14 ...
We study the Wigner function for massive spin-1/2 fermions in electromagnetic fields. The Wigner function is solved analytically in five cases where the electromagnetic fields are constant. For a general space-time-dependent field configuration, we use the method of semi-classical expansion and solve the Wigner function at linear order in Planck's constant. At the same order, we obtain a generalized Boltzmann equation for the particle distribution and a generalized BMT equation for the spin polarization. Using the Wigner function, we calculate some physical quantities in a system in thermal equilibrium.
The present thesis is primarily concerned with the application of the functional renormalization group (FRG) to spin systems. In the first part, we study the critical regime close to the Berezinskii-Kosterlitz-Thouless (BKT) transition in several systems. Our starting point is the dual-vortex representation of the two-dimensional XY model, which is obtained by applying a dual transformation to the Villain model. In order to deal with the integer-valued field corresponding to the dual vortices, we apply the lattice FRG formalism developed by Machado and Dupuis [Phys. Rev. E 82, 041128 (2010)]. Using a Litim regulator in momentum space with the initial condition of isolated lattice sites, we then recover the Kosterlitz-Thouless renormalization group equations for the rescaled vortex fugacity and the dimensionless temperature. In addition to our previously published approach based on the vertex expansion [Phys. Rev. E 96, 042107 (2017)], we also present an alternative derivation within the derivative expansion. We then generalize our approach to the O(2) model and to the strongly anisotropic XXZ model, which enables us to show that weak amplitude fluctuations as well as weak out-of-plane fluctuations do not change the universal properties of the BKT transition.
In the second part of this thesis, we develop a new FRG approach to quantum spin systems. In contrast to previous works, our spin functional renormalization group (SFRG) does not rely on a mapping to bosonic or fermionic fields, but instead deals directly with the spin operators. Most importantly, we show that the generating functional of the irreducible vertices obeys an exact renormalization group equation, which resembles the Wetterich equation of a bosonic system. As a consequence, the non-trivial structure of the su(2) algebra is fully taken into account by the initial condition of the renormalization group flow. Our method is motivated by the spin-diagrammatic approach to quantum spin systems that was developed more than half a century ago in a seminal work by Vaks, Larkin, and Pikin (VLP) [Sov. Phys. JETP 26, 188 (1968)]. By embedding their ideas in the language of the modern renormalization group, we avoid the complicated diagrammatic rules while at the same time allowing for novel approximation schemes. As a demonstration, we explicitly show how VLP's results for the leading corrections to the free energy and to the longitudinal polarization function of a ferromagnetic Heisenberg model can be recovered within the SFRG. Furthermore, we apply our method to the spin-S Ising model as well as to the spin-S quantum Heisenberg model, which allows us to calculate the critical temperature for both a ferromagnetic and an antiferromagnetic exchange interaction. Finally, we present a new hybrid formulation of the SFRG, which combines features of both the pure and the Hubbard-Stratonovich SFRG that were published recently [Phys. Rev. B 99, 060403(R) (2019)].
The aim of the present work is the construction of coaxial plasma accelerators and their use for studying the properties of colliding plasmas. In the future, these colliding plasmas are intended to serve as an intense radiation source in the ultraviolet (UV) and vacuum-ultraviolet (VUV) range, and in fundamental research as a target for ion-beam-plasma interaction. For these applications, an understanding of the underlying physics is essential: besides knowledge of the plasma dynamics, statements about the electron density, the electron temperature, and the radiation intensity are important. In particular, it could be shown that a plasma collision can significantly increase the electron density of the plasma compared with that of a single plasma, by a factor of four at maximum. At the same time, the plasma collision increased the light intensity in the UV and VUV wavelength range by a factor of three...
High-energy heavy-ion collisions offer the unique opportunity to produce and study dense nuclear matter in the laboratory. The future Facility for Antiproton and Ion Research (FAIR) in Darmstadt, Germany, will provide beams of heavy nuclei with kinetic energies of up to 11 GeV/nucleon. At these energies, the nuclear matter in the collision zone of two nuclei will be compressed to densities of up to 5-10 times the saturation density of atomic nuclei, similar to the matter densities existing in the cores of massive neutron stars. Under those conditions, nucleons are expected to melt and form a new state of matter consisting of quarks and gluons, the so-called Quark-Gluon Plasma (QGP). The search for such a phase transition from hadronic to partonic matter, and the exploration of the nuclear-matter equation of state at high densities, are the major goals of heavy-ion experiments worldwide.
The observables proposed to probe the properties of dense nuclear matter and possible phase transitions include multi-strange hyperons, antibaryons, lepton pairs, collective flow of identified particles, fluctuations and correlations of various particles, particles containing charm quarks, and hypernuclei. These observables have to be measured multi-differentially, i.e. as a function of collision centrality, rapidity, transverse momentum, energy, emission angle, etc., which requires extremely high statistics. Moreover, some of these particles are produced only very rarely.
Therefore, the Compressed Baryonic Matter (CBM) experiment at FAIR is designed to run at collision rates of up to 10 MHz, in order to perform measurements with unprecedented precision. Due to the complicated decay topologies of many of these observables, no hardware trigger can be applied, and the data have to be analysed online in order to filter out the interesting events.
This strategy requires free-streaming readout electronics that attach time stamps to all detector signals, a high-performance computing center, and high-speed reconstruction algorithms that perform online track and event reconstruction based on the time and position information of the detector hits ("4-D" reconstruction).
The core detector of the CBM experiment is the Silicon Tracking System (STS). The main task of the STS is the track reconstruction and momentum determination of charged particles originating from beam-target interactions. To fulfil these tasks, the STS is located in the large gap of a superconducting dipole magnet with a bending power of 1 Tm, which enables the momentum measurement of charged particles. The STS comprises 8 detector stations, positioned from 30 cm to 100 cm downstream of the target. The active area of the stations grows from 40 × 50 cm² to 100 × 100 cm², corresponding to a total area of 4 m². The double-sided silicon sensors carry 1024 strips on each side, with a stereo angle of 7.5° on the p-side and a strip pitch of 58 μm. The strip length ranges from 2 cm for sensors located in close vicinity to the beam axis up to 12 cm for sensors where the flux of the reaction products drops substantially. In total, the STS consists of 896 sensors mounted on 106 detector ladders. The detector readout electronics dissipates 40 kW and will be equipped with a bi-phase CO₂ cooling system. The detector, including its electronics, will be mounted in a thermal enclosure to allow sensor operation below −5 °C, which minimizes radiation-induced leakage currents.
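The total channel count of the STS follows directly from the sensor figures quoted above; a minimal sketch of the arithmetic (numbers taken from this abstract, not from an independent source):

```python
# Total STS readout channels, computed from the quoted geometry:
# 896 double-sided sensors, 1024 strips per side, 2 sides per sensor.
N_SENSORS = 896
STRIPS_PER_SIDE = 1024
SIDES = 2

total_channels = N_SENSORS * STRIPS_PER_SIDE * SIDES
print(f"{total_channels:,} readout channels")  # → 1,835,008
```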
The task of the STS is to measure the trajectories of up to 800 charged particles per collision with an efficiency of more than 95% and a momentum resolution of 1-2%. In order to guarantee the required performance over the full lifetime of the CBM experiment, the detector system must have a low material budget, a high granularity, a high signal-to-noise ratio (SNR), and a high radiation tolerance. As a result of optimisation studies, the STS consists of double-sided silicon microstrip sensors, about 300 μm thick, which have to provide an SNR of more than 10 even after irradiation with the expected equivalent lifetime fluence of 10¹⁴ 1 MeV n_eq cm⁻².
This thesis is devoted to the characterization of double-sided silicon microstrip sensors, with an emphasis on the investigation of their radiation hardness. Different prototypes of double-sided silicon sensors produced by two vendors have been irradiated with 23 MeV protons up to twice the lifetime fluence of the CBM experiment (2 × 10¹⁴ 1 MeV n_eq cm⁻²).
The sensor properties have been characterised before and after irradiation. It was found that, after irradiation with twice the lifetime fluence, the leakage current increases by a factor of about 1000, which results in increased shot noise. Moreover, the charge collection efficiency of irradiated sensors relative to non-irradiated ones drops to 85% at the lifetime fluence, and to 73% at twice the lifetime fluence, both for the p-side and the n-side. For non-irradiated sensors the SNR was found to be in the range of 20-25, whereas for irradiated sensors it dropped to 12-17.
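The link between the leakage-current increase and the shot-noise term can be illustrated with the standard parallel-noise scaling, where the equivalent noise charge grows as the square root of the leakage current at fixed shaping time; a hedged sketch (the scaling law is textbook detector physics, the absolute noise level is not taken from the thesis):

```python
import math

# Parallel (shot) noise: ENC_shot ∝ sqrt(I_leak * tau) at fixed shaping
# time tau, so a 1000-fold leakage-current increase raises the shot-noise
# contribution by sqrt(1000) ≈ 31.6.
current_increase = 1000.0
noise_scale = math.sqrt(current_increase)
print(f"shot-noise contribution grows by a factor of {noise_scale:.1f}")  # → 31.6
```

This growth of only one noise term (other contributions stay constant) is consistent with the SNR degrading from 20-25 to 12-17 rather than collapsing outright.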
In addition to the sensor characterization, part of this thesis was devoted to the optimisation of the sensor readout scheme. In order to investigate a possible increase of the SNR, and to reduce the number of readout channels in the outer aperture of the STS, three versions of routing lines have been realized for the p-side readout of the sensor prototype and have been tested in the laboratory and under beam conditions.
The tests have been performed with different inclination angles between the beam direction and the sensor surface, corresponding to the polar-angle acceptance of the CBM experiment, which ranges from 2.5° to 25°.
As a result of the studies carried out in this thesis, the radiation hardness of the double-sided silicon microstrip sensors developed for the CBM STS was confirmed, and the advantage of individual readout of the sensor channels in the lateral regions of the detector was verified. This made it possible to start the tendering process for the sensor series production in industry, an important step towards the construction of the detector in the coming years.