Within this thesis, the mechanical integration of the Micro Vertex Detector (MVD) of the Compressed Baryonic Matter (CBM) experiment is developed. The CBM experiment, which is being set up at the future FAIR facility, aims to investigate the phase diagram of strongly interacting matter in the regime of high net-baryon densities and moderate temperatures. Heavy-ion collisions at beam energies in the range of 2 to 45 AGeV, complemented by results from elementary reactions, will allow access to these conditions. The experiments conducted at the LHC (CERN, Switzerland) and at RHIC (BNL, USA), apart from RHIC's Beam Energy Scan program, have so far focused on the investigation of the phase diagram in the regime of high temperatures and vanishing net-baryon densities. The high beam intensities provided by FAIR will enable CBM to focus its experimental program on systematic studies of rare particles. Among other particle species, open-charm-carrying particles are one of the most promising observables to investigate the medium created in heavy-ion collisions, since their charm quarks are exposed to the medium and traverse its whole evolution. The fact that the decay particles of these rare observables are also produced abundantly in direct processes in heavy-ion collisions results in a huge combinatorial background, which imposes specific requirements on the detector systems. The demand for a high interaction rate calls for a cutting-edge detector system that provides an excellent spatial resolution, thin detector stations and the capability to cope with the induced radiation as well as the high rate of traversing particles and the resulting track density. These requirements are to be met by the MVD, which will be equipped with four planar stations positioned at 50, 100, 150 and 200 mm downstream of the target.
The geometrical acceptance, which has to be covered with charge-sensitive material, is defined according to the requirements of CBM in the polar angle range of [2.5°; 25°]. The MVD stations have to contribute as little as possible to the overall material budget. The expected beam intensity and the proximity to the target require silicon detectors that tolerate non-ionizing radiation doses of more than 10^13 n_eq/cm² and ionizing radiation doses of more than 1 Mrad. In addition, the read-out time of the sensors has to be as short as possible to avoid potential ambiguities in the particle tracking caused by the pile-up of hits emerging from different collisions. For the time being, Monolithic Active Pixel Sensors (MAPS) offer the optimal choice of technology to address the physics program of CBM with respect to the spectroscopy of open charm and di-electrons. The geometrical properties of these sensors define the layout of the detector. To limit the multiple scattering of the produced particles inside the geometrical acceptance, the sensors and the MVD have to operate in a moderate vacuum. The sensors are thinned down to a thickness of 50 µm and, to achieve a maximum polar angle coverage, they are glued onto both sides of dedicated thin carriers. These carriers, which are made of highly thermally conductive materials such as CVD diamond or encapsulated TPG, allow efficient extraction of the power dissipated in the sensors. This enables their operation at temperatures well below 0 °C, as suggested by corresponding radiation hardness studies. Dedicated actively cooled aluminum-based heat sinks are positioned outside of the acceptance to remove the heat produced by the sensors and the front-end electronics. The design of the MVD, including the realistic thicknesses of the integrated materials, has been developed and refined in the context of this thesis.
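The emphasis on a minimal material budget and operation in vacuum reflects the standard multiple-scattering estimate. As a point of reference (the commonly quoted PDG/Highland parametrization, not a formula taken from this work), the RMS scattering angle of a particle of charge number z, momentum p and velocity βc traversing a thickness x of material with radiation length X_0 is

```latex
\theta_0 \;=\; \frac{13.6\,\mathrm{MeV}}{\beta c\, p}\; z\, \sqrt{\frac{x}{X_0}}\,
\left[\, 1 + 0.038\,\ln\!\left(\frac{x}{X_0}\right) \right],
```

so the scattering angle grows roughly as the square root of the material budget x/X_0, which is why every station's thickness matters directly for the vertexing precision.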
It has been transformed into a unique software model which is used to simulate and further optimize the mechanical and thermal properties of the MVD, as well as in sophisticated physics simulations. The model allowed evaluation of the material budget of each individual MVD station in its geometrical acceptance. The calculated averaged material budget values stay well below the target values demanded by the physics cases. The thermal management of the MVD has been simulated on the level of a quadrant of each MVD station (four identically constructed quadrants form an MVD station), taking into account the material properties of the sensors, the glue and the sensor carrier. The temperature gradients across the pixels of a given sensor area in the direction of the rows and columns were found to be in an acceptable range of below 5 K. Assuming a sensor power dissipation of 0.35 W/cm², the thermal simulations yield a temperature difference between the thermal interface area and the maximum sensor temperature of dT = 5 K on the first and dT = 40 K on the fourth MVD station, highlighting the need to optimize the thermal interface between the involved materials as well as the power dissipation of the sensors. The feasibility of several key aspects required for the construction phase of the MVD has been investigated within the MVD Prototype project. The construction of the MVD Prototype allowed evaluation, testing and validation of the handling and the double-sided integration of ultra-thin sensors (the required working steps for their integration have been specified, evaluated and successfully established) as well as their operation in the laboratory and during a concluding in-beam test using high-energy pions provided by the CERN-SPS.
The thermal characterization of the MVD Prototype during its operation, in a temperature range of [5 °C; 25 °C] and not in vacuum, confirmed the corresponding thermal simulations conducted during its design phase and substantiated the results of the thermal simulations for the design of the MVD. The aim of a material budget of only x/X_0 ~ 0.3% for the MVD Prototype has been accomplished. In the analysis of the in-beam data, the nominal sensor performance parameters were successfully reproduced, demonstrating that the proposed integration process does not impair the sensors' performance. Moreover, no evidence of a potential impact on the sensors' performance arising from mechanical weaknesses of the MVD Prototype mechanics has been found in the analyzed data. Based on the MVD Prototype and the simulations of the material budget as well as the thermal management, this thesis evaluated the work packages, procedures and quality assurance parameters needed to set up the starting version of the MVD and addressed open questions as well as critical procedures to be studied prior to the production phase of the detector, emphasizing the evaluation of the cooling concept in vacuum and the integration of sensors in ladder structures on both sides of the quadrants of the MVD stations.
Nanotechnology is a rapidly developing branch of science focused on the study of phenomena at the nanometer scale, in particular on the possibilities of manipulating matter. One of the main goals of nanotechnology is the development of controlled, reproducible, and industrially transferable nanostructured materials.
The conventional technique of thin-film growth by deposition of atoms, small atomic clusters and molecules on surfaces is the general method often used in nanotechnology for the production of new materials. Recent experiments show that patterns with different morphologies can be formed during the deposition of nanoparticles on a surface. In this context, predicting the final architecture of the growing materials is a fundamental problem worth studying.
Another factor that plays an important role in industrial applications of new materials is the post-growth stability of the deposited structures. Understanding the post-growth relaxation processes would make it possible to estimate the lifetime of the deposited material depending on the conditions under which the material was fabricated. Controllable post-growth manipulation of the architecture of deposited structures opens a new path for the engineering of nanostructured materials.
The task of this thesis is to advance the understanding of the mechanisms of formation and post-growth evolution of nanostructured materials fabricated by the deposition of atomic clusters on a surface. In order to achieve this goal, the following main problems were addressed:
1. The properties of isolated clusters can differ significantly from those of analogous clusters residing on a solid surface. The difference is caused by the interaction between the cluster and the solid. Therefore, understanding the structural and dynamical properties of an atomic cluster on a surface is a topic of intense interest from both the scientific and the technological point of view. In the thesis, the stability, energy, and geometry of an atomic cluster on a solid surface were studied using a liquid-drop approach which takes into account the cluster-solid interaction. The geometries of the deposited clusters are compared with those of isolated clusters and the differences are discussed.
2. The formation scenarios of patterns on a surface in the course of cluster deposition depend strongly on the dynamics of the deposited clusters. Therefore, an important step towards predicting pattern morphology is to study the dynamics of a single cluster on a surface. The process of cluster diffusion on a surface was modeled using the classical molecular dynamics technique, and the diffusion coefficients of silver nanoclusters were obtained from the analysis of the trajectories of the clusters. The dependence of the diffusion coefficient on the system's temperature and on the cluster-surface interaction was established. The results of the calculations are compared with the available experimental results for the diffusion coefficient of silver clusters on a graphite surface.
3. The methods of classical molecular dynamics cannot be used to model the self-assembly processes of atomic clusters on a surface, because these processes occur on a timescale of minutes, which would require unattainable computational resources. Based on the results of the molecular dynamics simulations for a single cluster on a surface, a Monte Carlo based approach has been developed to describe the dynamics of the self-assembly of nanoparticles on a surface. This method accounts for free particle diffusion on a surface, aggregation into islands and detachment from these islands. The developed method allows one to study the pattern formation of structures up to thousands of nanometers in size, as well as the stability of these structures. The method was implemented in the MBN Explorer software package.
4. The process of pattern formation on a surface was modeled for several different scenarios. Based on the analysis of the simulation results, a criterion was suggested that can be used to distinguish between different patterns formed on a surface, for example between fractals and compact islands. This criterion can be used to predict the final morphology of a growing structure.
5. The post-growth evolution of patterns on a surface was also analyzed. In particular, attention in the thesis is paid to a systematic theoretical analysis of the post-growth processes occurring in nanofractals on a surface. The time evolution of the fractal morphology in the course of post-growth relaxation was analyzed, and the results of these calculations were compared with the experimental data available for the post-growth relaxation of silver cluster fractals on a graphite substrate.
All the aforementioned problems are discussed in detail in the thesis.
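The trajectory analysis of item 2, extracting a diffusion coefficient from the mean-squared displacement (MSD) of a simulated trajectory, can be illustrated with a minimal sketch. A plain 2D random walk with a known diffusion coefficient stands in for the silver-cluster trajectories; all parameters are illustrative, not values from the thesis.

```python
import numpy as np

def msd(traj, max_lag):
    """Mean-squared displacement of a (T, 2) trajectory for lags 1..max_lag."""
    return np.array([np.mean(np.sum((traj[lag:] - traj[:-lag]) ** 2, axis=1))
                     for lag in range(1, max_lag + 1)])

def estimate_D(traj, dt, max_lag=50):
    """Fit the 2D Einstein relation MSD(t) = 4 D t by least squares through the origin."""
    lags = np.arange(1, max_lag + 1) * dt
    m = msd(traj, max_lag)
    # slope of MSD vs. t through the origin, divided by 2 * dimension
    return np.sum(lags * m) / np.sum(lags ** 2) / 4.0

rng = np.random.default_rng(0)
D_true, dt, steps = 1.0, 0.01, 200_000
# 2D Brownian trajectory: independent Gaussian steps with variance 2*D*dt per coordinate
traj = np.cumsum(rng.normal(0.0, np.sqrt(2 * D_true * dt), size=(steps, 2)), axis=0)
D_est = estimate_D(traj, dt)
print(f"D_true = {D_true}, D_est = {D_est:.3f}")
```

The same MSD-slope procedure applies to a molecular dynamics trajectory of a cluster's center of mass; only the trajectory source changes.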
As its fundamental function, the brain processes and transmits information using populations of interconnected nerve cells known as neurons. The communication between these neurons occurs via discrete electric impulses called spikes. A core challenge in neuroscience has been to quantify how much information about relevant stimuli or signals a neuron transports in its spike sequences, or spike trains. The recently introduced correlation method makes it possible to determine this so-called mutual information in terms of a neuron's temporal spike correlations under certain stationarity assumptions. Based on the correlation method, I address several open questions regarding neural information encoding in the cortex.
In the first part (chapter 2), I investigate the role of temporal spike correlations for neural information transmission. Temporal correlations in neuronal spike trains diminish independence in the information that is transmitted by the different spikes and hence introduce redundancy to stimulus encoding. However, exact methods to describe how such spike correlations impact information transmission quantitatively have been lacking. Here, I provide a general measure for the information carried by spike trains of neurons with correlated rate modulations only, neglecting other spike correlations, and use it to investigate the effect of rate correlations on encoding redundancy. I derive it analytically by calculating the mutual information between a time-correlated, rate-modulating signal and the resulting spikes of Poisson neurons. Whereas this information is determined by spike autocorrelations only, the redundancy in information encoding due to rate correlations depends on both the distribution and the autocorrelation of the rate histogram. I further demonstrate that, at very small signal strengths, the information carried by rate correlated spikes becomes identical to that of independent spikes, in effect measuring the rate modulation depth. In contrast, a vanishing signal correlation time maximizes information transmission but does not generally yield the information of independent spikes.
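The rate-modulated Poisson neurons underlying this calculation can be illustrated with a minimal sketch of generating spikes from a time-varying rate by Lewis-Shedler thinning. The sinusoidal "signal" and all parameters are illustrative, not taken from the thesis.

```python
import numpy as np

def inhomogeneous_poisson(rate_fn, r_max, t_end, rng):
    """Sample spike times on [0, t_end) from rate_fn by Lewis-Shedler thinning.

    Candidate events are drawn from a homogeneous Poisson process with
    rate r_max >= rate_fn(t) and kept with probability rate_fn(t) / r_max.
    """
    t, spikes = 0.0, []
    while True:
        t += rng.exponential(1.0 / r_max)
        if t >= t_end:
            return np.array(spikes)
        if rng.random() < rate_fn(t) / r_max:
            spikes.append(t)

rng = np.random.default_rng(1)
# firing rate modulated around 20 Hz by a slow sinusoidal "signal"
rate = lambda t: 20.0 * (1.0 + 0.5 * np.sin(2 * np.pi * 1.0 * t))
train = inhomogeneous_poisson(rate, r_max=30.0, t_end=1000.0, rng=rng)
mean_rate = len(train) / 1000.0
print(f"{len(train)} spikes, mean rate {mean_rate:.1f} Hz")
```

Feeding a temporally correlated random signal into the rate, instead of a deterministic sinusoid, yields exactly the kind of spike trains whose mutual information with the signal is computed analytically in this chapter.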
In the second part (chapter 3), I analyze the information transmission capabilities of two particular schemes of encoding stimuli in the synaptic inputs using integrate-and-fire neuron models. Specifically, I calculate the exact information contained in spike trains about signals which modulate either the mean or the variance of the somatic currents in neurons, as is observed experimentally. I show that the information content about mean modulating signals is generally substantially larger than about variance modulating signals for biological parameters. This result provides evidence, by means of exact calculations of the mutual information, against the potential benefit of variance encoding that had been suggested previously.
Another analysis reveals that higher information transmission is generally associated with a larger proportion of nonlinear signal encoding. Moreover, I show that a combination of signal-dependent mean and variance modulations of the input current can synergistically benefit information transmission through a nonlinear coupling of both channels. On a more general level, I identify what was previously considered an upper bound as the exact, full mutual information. Furthermore, by analyzing the statistics of the spike train Fourier coefficients, I identify the means of the Fourier coefficients as information-carrying features.
Overall, this work contributes answers to central questions of theoretical neuroscience concerning the neural code and neural information transmission. It sheds light on the role of signal-induced temporal correlations for neural coding by providing insight into how signal features shape redundancy and by establishing mathematical links between existing methods and providing new insights into the spike train statistics in stationary situations. Moreover, I determine what fraction of the mutual information is linearly decodable for two specific signal encoding schemes.
This thesis deals with the development of a spectroscopic method for medical diagnostics and aims at introducing new analytical methods into clinical practice that promise higher quality in patient treatment as well as cost reductions. A reagent-free infrared spectroscopic measurement method is presented with which the concentrations of certain constituents of body fluids and other liquids can be determined quantitatively. For this purpose, the commercial FTIR (Fourier transform infrared) spectrometer ALPHA from Bruker is used, for which a special ATR (attenuated total reflection) measurement cell was constructed. It is suitable both for flow measurements at volume flows of up to 1 l/min and for discrete samples with a minimum volume of 10 µl. The combination of spectrometer and measurement cell thus constitutes a compact instrument that requires only a computer for control and evaluation and whose stability also permits long-term measurements. It thereby provides the basis for a novel medical device that can also be used outside the laboratory environment, and in particular in clinical routine by untrained personnel.
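For orientation, the effective probing depth of such ATR measurements is governed by the evanescent field at the crystal-sample interface. A standard textbook expression for the penetration depth (not a formula quoted from this work) is

```latex
d_p \;=\; \frac{\lambda}{2\pi\, n_1 \sqrt{\sin^2\theta - (n_2/n_1)^2}},
```

where λ is the wavelength, n_1 the refractive index of the ATR crystal, n_2 that of the sample, and θ the angle of incidence; in the mid-infrared this yields probing depths on the micrometer scale, which is what makes ATR suitable for strongly absorbing aqueous samples such as blood and dialysate.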
The quantitative evaluation of the spectra is performed by means of multivariate calibration and PLS (partial least squares) regression. For the different constituents, corresponding calibration models are used, which were built from a series of carefully selected samples. The selection primarily aims at a broad concentration range and at concentration values of the constituents that vary as independently of one another as possible. Therefore, samples in both the physiological and the pathological range were used. Since the concentration values of the calibration samples must be known, the samples were analyzed by conventional clinical methods. The accuracy of this reference analysis limits the maximum achievable accuracy of the presented method.
Within this work, calibration models were built for the constituents glucose, urea, creatinine and lactate in the washing solution used in hemodialysis (dialysate), for the constituents glucose, urea, cholesterol, triacylglycerides, albumin and total protein in whole blood, and likewise for hemoglobin and immunoglobulin G in hemolyzed whole blood. In the case of dialysate, both artificially prepared samples and samples taken during real dialysis treatments of patients were used. For whole blood, existing spectra were adapted to the new instrument and extended by spectra of new blood samples. The accuracy and precision achieved in this way already meet clinical requirements in most cases.
For dialysate it is shown that, with the presented setup, continuous inline measurements directly at the patient are already possible and yield good results. Attention was paid both to easy applicability during dialysis treatment and to simple operation by means of the presented software. The device can thus be integrated into everyday clinical practice without difficulty and, owing to its reagent-free operation, offers a cost-effective method for the continuous and regular monitoring of treatment courses.
In the case of whole blood it is shown that measurements with a sample volume of 10 µl, taken for example from a fingertip, are possible in principle and likewise yield reproducible results. This provides a precise, simple, compact and low-operating-cost method to quantitatively determine important blood parameters within a short time.
The compact and reagent-free measurement system permits a multitude of applications that benefit in particular from the fast analysis results and the low consumable costs. At blood donation services, at general practitioners or in retirement homes, for example, the fast and simple determination of the blood parameters investigated here can serve as a first assessment of the patient and thus facilitate the diagnosis. The high sample throughput and the negligible operating costs in this case lead to a fast amortization of the acquisition costs. In pharmacies, too, such a system can offer an extended service for customers.
Owing to the small sample volume, the measurement system is also suitable for applications involving laboratory animals, for example for the analysis of mouse blood at the German Mouse Clinic at the Helmholtz Zentrum München. The amount of blood to be taken from the mouse, and thus the strain on the animal, can thereby be reduced considerably.
The compactness of this universal system furthermore makes it possible to analyze a multitude of other liquids that have already been successfully analyzed by infrared spectroscopy, including urine, beer and wine.
Finally, the thesis also shows that the use of tunable quantum cascade lasers together with the ATR technique opens up, in principle, the possibility of replacing the complex and expensive FTIR spectrometers. In the long term, both a miniaturization of the setup and a decrease of the currently still very high acquisition price can be expected. The already available tuning range is sufficient for the determination of the glucose concentration. An extension, for example by using several quantum cascade lasers with different tuning ranges, enables the analysis of further parameters.
This thesis presents an experiment that makes it possible to investigate the interactions between electrons in the presence of an extremely strong laser field. These result from the non-sequential multiphoton double ionization of neon in a strong electric field generated by a high-power laser. By means of the COLTRIMS technology, the particles produced can be detected and their momentum components determined. This technology is a "microscope" that observes atomic-physics processes in a fully differential manner. The electrons and the recoil ion created in the double ionization are guided by a weak electric field onto position- and time-resolving multichannel-plate detectors with delay-line readout. In addition, a magnetic field is superimposed. From the impact position and the time of flight of the particles, the momenta can be determined. For the first time, it is possible to determine the momentum components in all three spatial directions for all particles involved in the ionization with sufficiently good resolution. Fully differential angular distributions can be obtained. Thus, a kinematically complete experiment is realized. The electrons are emitted preferentially along the polarization vector of the laser light. Owing to the good momentum resolution, it is now possible to investigate the direction perpendicular to the polarization and to relate the findings to one another. The very intuitive model underlying non-sequential double ionization is the "rescattering process": the laser field couples to the Coulomb potential of the atom and deforms it such that an electron can cross the effective potential barrier or tunnel through it. This first liberated electron is initially driven away from its parent ion by the oscillating electromagnetic field.
When the phase of the laser field reverses, however, the electron is accelerated back towards the ion, picking up energy from the field, and can ionize a second electron out of the atom by electron-electron impact ionization, or short-lived excited states can be created that are subsequently field-ionized. This model has already been verified by a multitude of experiments. At the same time, however, it raises questions: How are the electron-electron correlations to be explained? How is the longitudinal momentum related to the transverse momentum? Which ionization mechanisms occur, and when? In summary, an experiment is presented that contributes to the exploration of correlation effects in multiphoton ionization and grants very detailed insights into the world of laser atomic physics. The data clearly demonstrate that a measurement of the correlated momenta of several particles in a laser field enables a time measurement with a resolution far below one femtosecond. The observed switching on and off of the electron repulsion, depending on the delay time measured via the longitudinal momentum correlation, demonstrates the possibility of pursuing "attosecond physics without attosecond pulses".
Hundreds of very bright objects exist in space that possess a high constant luminosity in the gamma-ray wavelength range. The constant luminosity of some of these objects is interrupted at regular intervals by strong outbursts, the so-called X-ray bursts. The main energy source of these X-ray bursts is the rapid proton capture process (rp-process). It is characterized by a sequence of (p,γ) reactions and β+ decays that produce the characteristic light curves. For many reactions involved in the process, the Q-value is very small, so that the rate of the individual reactions is dominated by resonant captures into the unbound states. The uncertainties in the description of the light curve are currently very large owing to missing nuclear-physics information on many isotopes involved in the process. Sensitivity studies show that the uncertainties of the 23Al(p,γ)24Si reaction have one of the largest effects on the light curve. They are caused by imprecise and contradictory information on the unbound states in short-lived 24Si.
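The statement that the rate is dominated by resonant captures into unbound states corresponds to the standard narrow-resonance formalism (a textbook expression, not a formula quoted from this work), in which the stellar rate is a sum over resonances i with energies E_i and strengths (ωγ)_i:

```latex
N_A \langle \sigma v \rangle \;=\; N_A \left( \frac{2\pi}{\mu k T} \right)^{3/2} \hbar^2
\sum_i (\omega\gamma)_i \, e^{-E_i / kT}.
```

Because the rate depends exponentially on the resonance energies, uncertainties in the 24Si level energies propagate very strongly into the 23Al(p,γ)24Si rate, which motivates the measurement described below.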
To obtain information on the nuclear structure of 24Si, the 23Al(d,n)24Si transfer reaction was studied at the National Superconducting Cyclotron Laboratory (NSCL), Michigan, USA. The experimental setup, realized in this form for the first time, consisted of a gamma detector for measuring the transition energies of the produced 24Si, a neutron detector for measuring the angular distribution of the emitted neutrons, and a mass spectrometer for identifying the produced isotope. With this setup, which enabled detection of the complete kinematics of the (d,nγ) reaction, the following findings were obtained:
From the energies of the detected gammas, the transitions between the nuclear levels of 24Si could be determined, and from these the energies of the individual states. In addition to the already known bound 2+ state (measured in this work at 1874 ± 2.9 keV) and the unbound 2+ state (3448.8 ± 4.6 keV), a further unbound (4+, 0+) state at 3470.6 ± 6.2 keV was observed for the first time. Furthermore, the discrepancy that existed between earlier measurements regarding the energy of the unbound 2+ state was resolved and the energy uncertainty reduced.
From the number of detected gammas, the (d,n) cross sections into the individual states of 24Si could likewise be determined. Using the results of DWBA calculations, the spectroscopic factors could be computed from these. For the excited states, a distinction had to be made between different angular-momentum transfers. By means of the angular distribution of the detected neutrons, it could be shown that weighting by the theoretical spectroscopic factors yields good results for calculating the contributions of the respective angular-momentum transfer to the total cross section of the corresponding state. For a quantitative determination of the spectroscopic factors of the states in 24Si from the neutron angular distributions, however, the statistics were too low. For the much more frequently observed 22Mg(d,n)23Al reaction, on the other hand, a spectroscopic factor of 0.29 ± 0.04 could be determined for the 23Al ground state. Finally, the implications of the findings on the nuclear structure of 24Si for the rate of the 23Al(p,γ) reaction were investigated. Owing to the improved energy determination, the discrepancy between the rates calculated on the basis of the two earlier studies, which deviate from each other by up to a factor of 20, could be resolved. Moreover, the smaller uncertainty in the energy determination reduced the error band of the rate. The investigations show that the uncertainty of the new rate is dominated by the imprecision of the mass determination of the two isotopes involved, and thus by the Q-value of the reaction. With a better determination of the Q-value, the uncertainty in the rate could be reduced to one tenth on the basis of the new experimental results.
The diffusive behavior of macromolecules in solution is a key factor in the kinetics of macromolecular binding and assembly, and in the theoretical description of many experiments. Experiments on high-density protein solutions have found a slowdown of the diffusion dynamics that is larger than expected from the colloidal theory of non-interacting hard spheres. It has also been shown that the rotational diffusion anisotropy in high-density protein solutions is larger than in dilute ones. High-density protein solutions are a complex fluid that differs from the neat-fluid assumption used in hydrodynamic theory. It is therefore important to have methods to accurately calculate the translational and rotational diffusion tensors from simulations, as well as simulation algorithms to explore high-density solutions.
Simulations provide a powerful tool to study diffusion in complex fluids. They can be used to study the macroscopic and microscopic effects of complex fluids on diffusive behavior. A lot of work has already been done to accurately simulate diffusion and to determine diffusion coefficients from simulations.
The translational diffusion of molecules in simple and complex liquids can be determined with high accuracy from simulations. This is not yet the case for rotational diffusion. Existing algorithms to calculate rotational diffusion coefficients from simulations make assumptions about the shape of the protein or only work at short times. For the simulation of the diffusive behavior of macromolecules, two options exist today: an all-atom integrator with explicit solvent molecules, or coarse-grained (CG) simulations with an implicit solvent. CG simulations of dynamic behavior with implicit solvent are also called Brownian dynamics (BD) simulations. For CG simulations, the Ermak-McCammon algorithm is often used to solve the underlying Langevin equation. The algorithm is an extension of the Euler-Maruyama integrator to include translation and rotation in three dimensions. It only reproduces the equilibrium probability correctly for short time-steps, and the error depends linearly on the time-step. It has been shown that Monte Carlo based algorithms can produce BD for translational dynamics when appropriately parametrized. The advantage of Monte Carlo based algorithms is that they reproduce the correct equilibrium distribution independent of the chosen time-step. This in turn allows choosing larger time-steps in simulations. The aim of this thesis is to develop novel methods to accurately determine the rotational diffusion coefficient from simulations and to extend existing Monte Carlo algorithms to include rotational dynamics.
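The Euler-Maruyama scheme mentioned above can be sketched for the purely translational case. This is a minimal overdamped-Langevin (BD) integrator in a harmonic trap, with an illustrative force field and parameters; it is not the full Ermak-McCammon algorithm with rotation.

```python
import numpy as np

def brownian_dynamics(x0, force, D, kT, dt, steps, rng):
    """Overdamped Langevin dynamics via the Euler-Maruyama scheme:
    x_{n+1} = x_n + (D/kT) * F(x_n) * dt + sqrt(2 D dt) * xi,  xi ~ N(0, I).
    """
    x = np.asarray(x0, dtype=float).copy()
    traj = np.empty((steps + 1, x.size))
    traj[0] = x
    noise = rng.normal(size=(steps, x.size))
    for n in range(steps):
        x = x + (D / kT) * force(x) * dt + np.sqrt(2.0 * D * dt) * noise[n]
        traj[n + 1] = x
    return traj

# harmonic trap F(x) = -k x: the sampled variance should approach kT/k per coordinate
rng = np.random.default_rng(2)
k, D, kT, dt, steps = 1.0, 1.0, 1.0, 1e-2, 200_000
traj = brownian_dynamics([0.0, 0.0, 0.0], lambda x: -k * x, D, kT, dt, steps, rng)
var = traj[20_000:].var()
print(f"sampled variance {var:.3f} (Boltzmann value {kT / k})")
```

The time-step bias discussed in the text is visible here: the sampled variance deviates from the Boltzmann value kT/k by a term linear in dt, which is precisely the discretization error that Metropolized Monte Carlo variants of BD avoid.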
The first project addresses the question of how to accurately determine rotational diffusion coefficients from simulations. We develop a quaternion-based method to calculate the rotational diffusion tensor from simulations and a theory for the effects of periodic boundary conditions (PBC) on the rotational diffusion coefficient in simulations.
Our method for calculating rotational diffusion coefficients is based on the quaternion covariances from Favro for a freely rotating rigid molecule. The covariances as formulated by Favro are only valid in the principal coordinate system (PCS) of the rotational diffusion tensor. The covariances can be generalized for an arbitrary reference coordinate system (RCS), i.e., a simulation, given the principal axes of the rotational diffusion tensor in the RCS. We show that no prior knowledge of the diffusion tensor and its principal axes is required to calculate the generalized covariances from simulations using common root-mean-square deviation (RMSD) procedures. We develop two methods to fit the covariances calculated from simulations to our generalized equations and thereby obtain the rotational diffusion tensor. In the first method we minimize the sum of the squared deviations between model and simulation data. For this six-dimensional optimization we use a simulated annealing algorithm. Alternatively, the rotational diffusion tensor can be determined from an eigenvalue decomposition of the integrated covariance matrix. To minimize the effects of sampling noise in the integration, we first apply a Laplace transformation to smooth the covariances at large times. For ideal sampling the resulting rotational diffusion coefficient should be independent of the value of the Laplace variable. In practice, however, the best results are achieved using a value close to the inverse autocorrelation time of the rotational motion.
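The Laplace-smoothing step can be illustrated on a synthetic, isotropic example. The single-exponential covariance form used below is a toy simplification of the full Favro expressions, and all numbers are illustrative:

```python
import numpy as np

def laplace_smooth(t, c, s):
    """Numerical Laplace transform L(s) = integral of c(t) * exp(-s*t) dt
    (trapezoid rule). The exp(-s*t) damping suppresses the noisy
    long-time tail of the covariance before the diffusion
    coefficient is extracted."""
    f = c * np.exp(-s * t)
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t)))

# Synthetic isotropic covariance c(t) = A * exp(-2*D*t) (toy model form):
D_true = 0.1
t = np.linspace(0.0, 200.0, 20001)
c = 0.25 * np.exp(-2.0 * D_true * t)
s = 2.0 * D_true                      # Laplace variable near the inverse correlation time
L = laplace_smooth(t, c, s)
D_est = (c[0] / L - s) / 2.0          # invert L = A / (s + 2*D) for D; recovers ~0.1
```

For noisy data the estimate acquires a dependence on s, which is why a value near the inverse autocorrelation time works best in practice.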
...
The characterization of microscopic properties in correlated low-dimensional materials is a challenging problem due to the effects of dimensionality and the interplay between the many different lattice and electronic degrees of freedom. Competition between these factors gives rise to interesting and exotic magnetic phenomena. An understanding of how these phenomena are driven by these degrees of freedom can be used for the rational design of new materials, to control and manipulate these degrees of freedom in order to obtain desired properties. In this work, we study these effects in materials with small exchange interactions between the magnetic ions, such as metal-organic and inorganic dilute compounds. We overcome the difficulties in studying these kinds of materials by combining classical and quantum mechanical ab initio methods and many-body theory methods in an effective theoretical approach. To treat metal-organic compounds we elaborate a novel two-step methodology which allows one to include quantum effects while reducing the computational cost. We show that our approach is an effective procedure, leading, at each step, to additional insights into the essential features of the phenomena and materials under study. Our investigation is divided into two parts, the first one concerning the exploration of the fundamental physical properties of novel Cu(II) hydroquinone-based compounds. We have studied two representatives of this family, a polymeric system Cu(II)-2,5-bis(pyrazol-1-yl)-1,4-dihydroxybenzene (CuCCP) and a coupled system Cu2S2F6N8O12 (TK91). The second part concerns the study of magnetic phenomena associated with the interplay between different energy scales and dimensionality in zero-, one- and two-dimensional compounds. In the zero-dimensional case, we have performed a comprehensive study of Cu4OCl6L4 with L=diallylcyanamide=NC-N-(CH2-CH=CH2)2 (Cu4OCl6daca4).
Interpretations of the magnetic properties of this tetrameric compound have been controversial and inconsistent. From our studies, we conclude that the common models usually applied to this and other representatives of the same family of cluster systems fail to provide a consistent description of their low-temperature magnetic properties, and we thus postulate that in such systems it is necessary to take into account quantum fluctuations due to possible frustrated behavior. In the one-dimensional case, we studied polymeric Fe(II)-triazole compounds, which are of special relevance due to the possibility of inducing a spin transition between the low and high spin states by applying an external perturbation. A long-standing problem has been a satisfactory microscopic explanation of this large cooperative phenomenon. A lack of X-ray data has been one reason for the absence of microscopic studies. In this work, we present a novel approach to the understanding of the microscopic mechanism of spin crossover in such systems and show that in these kinds of compounds magnetic exchange between high-spin Fe(II) centers plays an important role. The correct description of the underlying physics in many materials is often hindered by the presence of anisotropies. To illustrate this difficulty, we have studied the two-dimensional dilute compound K2V3O8, which exhibits an unusual spin reorientation effect when magnetic fields are applied. While this effect can be understood when considering anisotropies in the system, these alone are not sufficient to reproduce the experimental observations. Based on our studies of the electronic and magnetic properties of this system, we predict an extra exchange interaction and the presence of an additional magnetic moment at the non-magnetic V site. This sheds new light on the controversial recent experimental data for the magnetic properties of this material.
The term superconductivity describes the phenomenon of vanishing electrical resistivity in a certain material, then called a superconductor, below a critical, typically very low, temperature. Since the discovery of superconductivity in mercury in 1911, many other superconductors have been found, and the critical temperature below which superconductivity occurs has recently been raised to temperatures encountered in a cold Antarctic winter.
Superconductors are promising materials for applications. They can serve as nearly loss-free cables for energy transmission, in coils for the generation of high magnetic fields, or in various electronic devices, such as detectors for magnetic fields. Despite these obvious advantages, the cost of using superconductors depends largely on the cooling effort needed to realize the superconducting state. Therefore, the search for a superconductor with a critical temperature above room temperature, which would avoid the need for any specialized cooling system, is one of the main projects of contemporary research in condensed matter physics.
While a theory of superconductivity in simple metals has already been developed in the 1950s, it has meanwhile been recognized that many superconductors are unconventional in the sense that their behavior does not follow the aforementioned theory. Unconventional superconductors differ from conventional superconductors mainly by the momentum- and real-space symmetry of the order parameter, which is associated with the superconducting state. While conventional superconductors have a uniform order parameter, unconventional superconductors can have an order parameter that bears structure. Of course, alternative theoretical descriptions have been suggested, but the discussion on the right theory for unconventional superconductivity has not yet been settled. Ultimately, this lack of a general theory of superconductivity prevents a targeted search for the room-temperature superconductor. Any new theoretical approach must, however, prove its value by correctly predicting the structure of the superconducting order parameter and further material properties.
In this work we participate in the search for a theory of unconventional superconductivity. We discuss the theory of superconductivity mediated by electron-electron interactions, which has been popular in the last few decades due to its success in explaining various properties of the copper-based superconductors that emerged in the 1980s. We give a detailed derivation of the so-called random phase approximation for the Hubbard model in terms of a diagrammatic many-body theory and apply it in conjunction with low-energy kinetic Hamiltonians, which we construct from first-principles calculations in the framework of density functional theory. Density functional theory is an established technique for calculating the electronic and magnetic properties of materials solely based on their crystal structure. Its practical implementations in computer codes do not, however, describe complicated many-electron phenomena like the superconducting state that we are interested in here. Nevertheless, it can provide important information about the properties of the normal state of the material, from which superconductivity emerges. In our theory we use this information and approach the superconducting state from the normal state.
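The central RPA formula can be illustrated with a scalar toy version of the susceptibility. The full calculation uses momentum- and frequency-dependent matrices; this sketch only shows the geometric-series enhancement:

```python
def rpa_susceptibility(chi0, U):
    """Scalar RPA spin susceptibility: chi_RPA = chi0 / (1 - U * chi0).

    The denominator sums the geometric series of bubble diagrams;
    chi_RPA diverges as U * chi0 -> 1 (Stoner instability toward magnetism).
    """
    if U * chi0 >= 1.0:
        raise ValueError("Stoner criterion U*chi0 >= 1: RPA breaks down")
    return chi0 / (1.0 - U * chi0)

# Enhancement of a bare susceptibility chi0 = 0.4 by an interaction U = 2.0:
chi = rpa_susceptibility(0.4, 2.0)   # 0.4 / (1 - 0.8) = 2.0, a fivefold enhancement
```

Near the instability, this strongly enhanced susceptibility provides the spin-fluctuation pairing interaction used in such theories.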
Such an interfacing of different calculational techniques requires a lot of implementation work in the form of computer code. Inclusion of the computer code into this work would consume by far too much space, but since some of the decisions on approximations in the calculational formalism are guided by the feasibility of the associated computer calculations, we discuss the numerical implementation in great detail.
We apply the developed methods to quasi-two-dimensional organic charge transfer salts and iron-based superconductors. Finally, we discuss implications of our findings for the interpretation of various experiments.
High resolution, compactness, scalability, efficiency – these are the critical requirements which imaging radar systems have to fulfil in applications such as environmental monitoring, cloud mapping, body sensing or autonomous driving. This thesis presents a modular millimetre-wave frequency modulated continuous-wave (FMCW) radar front-end solution intended for such applications. High resolution is achieved by enlarging the operating frequency band of the radar system. This can be realized at millimetre-wave frequencies due to the large spectrum availability. Furthermore, the size of components decreasing with increasing frequency makes millimetre-wave systems a good candidate for compactness. However, the full integration of radar front-ends is a challenge at millimetre-wave frequencies due to poor signal integrity and spectral purity, which are essential for imaging applications. The proposed radar uses an alternative technique and tackles this limitation by featuring highly integrable architectures, specifically the Hartley architecture for signal conversion and an enhanced push-pull amplifier for harmonic suppression. The resolution of imaging radars can be further improved by increasing the number of transmitters and receivers. This has spurred the investigation of spectrum-, time- and energy-efficient multiplexing techniques for multiple-input multiple-output (MIMO) radar systems. The FMCW radar architecture proposed in this thesis is based on a code-division technique using intra-pulse, also called intra-chirp, modulation. This advanced, scalable and non-complex solution, made possible by the latest achievements in direct digital synthesis for signal generation, guarantees signal integrity and a compact implementation. The proposed architecture is investigated by a thorough system analysis.
A transmitter module and a receiver module for a 35 GHz imaging radar prototype are designed, fabricated and fully characterized to validate the feasibility of our novel approach for high-resolution highly-integrated MIMO front-ends.
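The link between sweep bandwidth and range resolution that motivates the wideband millimetre-wave design can be sketched with the standard FMCW relations. The numbers below are illustrative, not the parameters of the 35 GHz prototype:

```python
C = 299_792_458.0  # speed of light, m/s

def range_resolution(bandwidth_hz):
    """FMCW range resolution: dR = c / (2 * B). Doubling the sweep
    bandwidth halves the smallest resolvable range difference."""
    return C / (2.0 * bandwidth_hz)

def beat_frequency(target_range_m, bandwidth_hz, chirp_duration_s):
    """Beat frequency of a stationary target: f_b = 2 * R * B / (c * T),
    i.e. the range is read off as a frequency offset after mixing."""
    return 2.0 * target_range_m * bandwidth_hz / (C * chirp_duration_s)

# A 2 GHz sweep gives roughly 7.5 cm resolution:
dr = range_resolution(2e9)
fb = beat_frequency(10.0, 2e9, 1e-3)   # 10 m target, 1 ms chirp
```

This is why enlarging the operating band directly improves imaging resolution, and why the millimetre-wave regime with its large available spectrum is attractive.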
The goal of the present work was to minimize the systematic initial beam losses in SIS18. SIS18 is to serve as the injector for SIS100 at the planned FAIR facility, for which its beam intensity must be increased. The dynamic vacuum in SIS18 and the initial beam losses, caused by multi-turn injection (MTI) or RF-capture losses, play a major role here. To stabilize the dynamic residual-gas pressure in SIS18, these systematic initial losses must be minimized. Beam particles lost on the vacuum chamber wall cause a local pressure rise through ion-stimulated desorption. This in turn increases the probability of collisions between residual-gas particles and beam ions, which can change the charge state of the ions so that they are lost on the vacuum chamber behind a dispersive element (dipole). This produces a further local pressure rise and causes a massive increase in the charge-exchange rates. One way to minimize or control the initial losses is to shift the MTI losses to the transfer channel (TK), where a pressure rise does not disturb the circulating beam in SIS18. In the transfer channel, the beam edges are trimmed with slits, producing a sharply defined phase-space area. ...
This thesis is devoted to the development of a classical model for the study of the energetics and stability of carbon nanotubes. The motivation behind such a model stems from the fact that production of nanotubes in a well-controlled manner requires a detailed understanding of their energetics. In order to study this, different theoretical approaches are possible, ranging from the computationally expensive quantum mechanical first-principles methods to relatively simple classical models. A wisely developed classical model has the advantage that it can be used for systems of any size while still producing reasonable results. The model developed in this thesis is based on the well-known liquid drop model without the volume term, and hence we call it the liquid surface model. Based on the assumption that the energy of a nanotube can be expressed in terms of its geometrical parameters, like surface area, curvature and the shape of the edge, the liquid surface model is able to predict the binding energy of nanotubes of any chirality once the total energy and chiral indices are known. The model is suggested for open-ended and capped nanotubes, and it is shown that the energy of capped nanotubes is determined by five physical parameters, while for open-ended nanotubes three parameters are sufficient. The parameters of the liquid surface model are determined from calculations performed with the empirical Tersoff and Brenner potentials, and the accuracy of the model is analysed. It is shown that the liquid surface model can predict the binding energy per atom for capped nanotubes with a relative error below 0.3% with respect to that calculated using the Brenner potential, corresponding to an absolute energy difference of less than 0.01 eV. The influence of the catalytic nanoparticle on top of which a nanotube grows on the nanotube energetics is also discussed.
It is demonstrated that the presence of a catalytic nanoparticle changes the binding energy per atom in such a way that if the interaction of a nanotube with the catalytic nanoparticle is weak, then attachment of an additional atom to the nanotube is an energetically favourable process, while if the catalytic nanoparticle-nanotube interaction is strong, it becomes energetically more favourable for the nanotube to collapse. The suggested model gives important insights into the energetics and stability of nanotubes of different chiralities and is an important step towards the understanding of the nanotube growth process. The Young modulus and curvature constant are calculated for single-wall carbon nanotubes from the parameters of the liquid surface model, and it is demonstrated that the obtained values are in agreement with the values reported earlier both theoretically and experimentally. The calculated Young modulus and curvature constant were used to draw conclusions about the accuracy of the Tersoff and Brenner potentials. Since the parameters of the liquid surface model are obtained from the Tersoff and Brenner potential calculations, the agreement of the elastic properties derived from these parameters indicates that both potentials are capable of describing the elastic properties of nanotubes. Finally, the thesis discusses possible extensions of the model to various systems of interest.
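The spirit of such a liquid surface model for an open-ended tube can be sketched as a sum of surface, curvature and edge terms. The functional form and all parameter values below are hypothetical illustrations, not the parameters fitted in the thesis:

```python
import math

def binding_energy_open_tube(n_atoms, radius, length,
                             e_surf=-7.4, c_curv=1.2, e_edge=0.5):
    """Liquid-surface-model-style energy of an open-ended nanotube (sketch).

    E = e_surf * A + c_curv * A / R**2 + e_edge * L_edge
    with surface area A = 2*pi*R*L and two open edges of length 2*pi*R each.
    The three parameters play the roles of a surface term, a curvature
    correction, and an open-edge penalty (values here are made up).
    """
    area = 2.0 * math.pi * radius * length
    edge = 2.0 * (2.0 * math.pi * radius)
    total = e_surf * area + c_curv * area / radius**2 + e_edge * edge
    return total / n_atoms  # binding energy per atom

# Wider tubes pay less curvature energy per atom (same areal atom density):
narrow = binding_energy_open_tube(n_atoms=400, radius=3.5, length=50.0)
wide   = binding_energy_open_tube(n_atoms=800, radius=7.0, length=50.0)
```

The 1/R**2 curvature term is what makes small-radius tubes less strongly bound, in line with the trend the model is built to capture.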
Fast nuclei are ionizing radiation which can cause deleterious effects in irradiated cells. The modelling of the interactions of such ions with matter and of the related effects is very important to physics, radiobiology, medicine, and space science and technology. A powerful method to study the interactions of ionizing radiation with biological systems was developed in the field of microdosimetry. Microdosimetry spectra characterize the energy deposition in objects of cellular size, i.e., a few micrometers.
In the present thesis the interaction of ions with tissue-like media was investigated using the Monte Carlo model for Heavy-Ion Therapy (MCHIT) developed at the Frankfurt Institute for Advanced Studies. MCHIT is a Geant4-based application intended to benchmark the physical models of Geant4 and investigate the physical properties of therapeutic ion beams. We have implemented new features in MCHIT in order to calculate microdosimetric quantities characterizing the radiation fields of accelerated nucleons and nuclei. The results of our Monte Carlo simulations were compared with recent experimental microdosimetry data.
In addition to microdosimetry calculations with MCHIT, we also investigated the biological properties of ion beams, e.g. their relative biological effectiveness (RBE), by means of the modified Microdosimetric-Kinetic model (MKM). The MKM uses microdosimetry spectra in describing cell response to radiation. MCHIT+MKM allowed us to study the physical and biological properties of ion beams. The main results of the thesis are as follows:
MCHIT is able to describe the spatial distribution of the physical dose in tissue-like media and microdosimetry spectra for ions with energies relevant to space research and ion-beam cancer therapy; MCHIT+MKM predicts a reduction of the biological effectiveness of ions propagating in an extended medium due to nuclear fragmentation reactions; we predicted favourable biological dose-depth profiles for monoenergetic helium and lithium beams, similar to the one for the carbon beam. Well-adjusted biological dose distributions for H-1, He-4, C-12 and O-16 with a very flat spread-out Bragg peak (SOBP) plateau were calculated with MCHIT+MKM; MCHIT+MKM predicts less damage to healthy tissues in the entrance channel for SOBP He-4 and C-12 beams compared to H-1 and O-16 ones. No definitive advantages of oxygen ions with respect to carbon were found.
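The way microdosimetric input translates into an RBE can be sketched with the standard linear-quadratic survival model. This is a strong simplification of the modified MKM (which derives the enhanced alpha from the dose-mean of the microdosimetric spectrum), and all alpha and beta values are illustrative:

```python
import math

def survival(dose, alpha, beta):
    """Linear-quadratic cell survival: S(D) = exp(-(alpha*D + beta*D^2))."""
    return math.exp(-(alpha * dose + beta * dose * dose))

def dose_for_survival(s_target, alpha, beta):
    """Invert the LQ curve: solve alpha*D + beta*D^2 = -ln(S) for D > 0."""
    ln_s = -math.log(s_target)
    return (-alpha + math.sqrt(alpha * alpha + 4.0 * beta * ln_s)) / (2.0 * beta)

# RBE at 10% survival: reference photon (alpha, beta) versus an ion whose
# alpha is enhanced (in the MKM, via the microdosimetric spectrum):
d_photon = dose_for_survival(0.1, alpha=0.16, beta=0.05)
d_ion    = dose_for_survival(0.1, alpha=0.45, beta=0.05)
rbe_10   = d_photon / d_ion   # dose ratio at equal biological effect
```

The RBE is thus a ratio of doses producing the same effect, which is why a fragmentation-induced change of the microdosimetric spectrum directly changes the predicted biological dose.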
This thesis presents a model for the dynamical description of deconfined quark matter created in ultra-relativistic heavy-ion collisions, treating quarks and antiquarks as classical point particles subject to a colour-dependent, Cornell-type potential interaction. The model provides a dynamical handle for hadronization via the recombination of quarks and antiquarks in colour-neutral clusters. Gluons are not included explicitly in the model, but are described in an effective manner by means of the potential interaction. The model includes four different quark flavours (up, down, strange and charm) and uses current masses for the quarks. The dynamical evolution of a system of colour charges subject to the Hamiltonian equations of motion of the model yields the formation of colour-neutral clusters of quarks and antiquarks, which are subject only to a small remaining interaction, the strong interquark potential notwithstanding. These clusters can be mapped onto hadrons and hadronic resonances. Thus, the model allows a dynamical description of quark degrees of freedom in heavy-ion collisions, including a recombination scheme for hadronization. The thermal properties of the model turn out to be very satisfying. The model shows a transition from a confining phase to a deconfined phase with rising temperature, going hand in hand with a softest point in the equation of state and a rise of energy density and pressure to the Stefan-Boltzmann limit of a gas of quarks and antiquarks. Moreover, the potential interaction is screened in the deconfined phase. For the dynamical description of ultra-relativistic heavy-ion collisions, the qMD model is coupled to UrQMD as a generator for its initial conditions. In this way, a fully dynamical description of the expansion and hadronization of the fireball created in such collisions can be achieved.
Non-equilibrium aspects of the expansion dynamics and hadronization by recombination of quarks and antiquarks are discussed in detail, and a comparison with experimental data of collisions at the CERN-SPS is presented. The big advantage of the qMD model is the possibility to study cluster formation, including exotic clusters, and fluctuations in a dynamical manner. As an example, event-by-event fluctuations in electric charge are studied. Such fluctuations have been proposed as a clear criterion to distinguish a deconfined system from a hadron gas. However, experimental data show hadron gas fluctuation measures even at RHIC, where deconfinement is taken for granted. We will see how the dynamics of quark recombination washes out the quark-gluon plasma signal in the fluctuation criterion. Moreover, we will briefly discuss the problem of entropy at recombination. In a second application, the formation of exotic hadronic clusters, larger than usual mesons and baryons, is studied. Such clusters could provide new measures for the thermalization and homogenization of a deconfined gas of colour charges. Moreover, number estimates for exotic clusters from recombination are considerably lower than corresponding predictions from thermal models, providing a clear difference between statistical hadronization and hadronization via quark recombination. A detailed analysis is provided for pentaquark candidates such as the Theta-Plus. It turns out that the distribution of exotic states over strangeness, isospin and spin could provide a sensitive measure for thermalization and decorrelation in the deconfined quark phase, if it could be measured.
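The Cornell-type interaction underlying such a model can be sketched as a short-range Coulomb-like attraction plus a linearly rising confinement term. Colour factors are omitted here, and the alpha and sigma values are illustrative, not the qMD parameters:

```python
HBARC = 0.197327  # GeV * fm

def cornell_potential(r_fm, alpha=0.3, sigma=0.9):
    """Cornell-type potential V(r) = -(4/3) * alpha * hbar*c / r + sigma * r.

    The first term is the short-range one-gluon-exchange attraction; the
    second is the linearly rising string term (sigma in GeV/fm), which
    makes it energetically impossible to separate colour charges to
    infinity and thus drives clustering into colour-neutral objects.
    """
    return -(4.0 / 3.0) * alpha * HBARC / r_fm + sigma * r_fm

# The linear term dominates at large separation -> confinement:
v_short = cornell_potential(0.2)   # Coulomb-like part dominates (negative)
v_long  = cornell_potential(2.0)   # string term dominates (positive, rising)
```

In the deconfined phase the model screens this potential, so the linear rise is cut off and quarks propagate quasi-freely.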
Measurements of the transverse momentum (pt) spectra of K0s and Λ(Λ̄) in Pb–Pb and pp collisions at √sNN = 2.76 TeV with the ALICE detector at the LHC at CERN, up to pt = 20 GeV/c and pt = 16 GeV/c respectively, are presented in this thesis. In addition, the particle rapidity densities at mid-rapidity and nuclear modification factors of K0s and Λ(Λ̄) are shown and discussed. The analysis was performed using the Pb–Pb data set from 2010 and the pp data set from 2011. For the identification of K0s and Λ(Λ̄), the on-the-fly V0 finder was employed on tracking information from the TPC and ITS detectors. The Λ and Λ̄ spectra were feed-down corrected using the measured published Ξ− spectra as input.
Regarding the rapidity density at mid-rapidity, a suppression of strange particle production in pp as compared to Pb–Pb collisions is observed at all centralities, whereas the production per pion rapidity density stays constant as a function of dNch/dη for both systems. Furthermore, the relative increase of the individual particle species in pp and AA collisions is compatible for non- and single-strange particles when going from RHIC (√sNN = 0.2 TeV) to LHC energies. On the other hand, in the case of multi-strange baryons, a stronger increase in particle production in pp is seen. The Λ̄ and Λ production in Pb–Pb and pp collisions was found to be equal. Concerning the nuclear modification factors, at lower pt (pt < 5 GeV/c), an enhancement of the RAA of Λ with respect to that of K0s and charged hadrons is observed. This baryon-to-meson enhancement appearing in central Pb–Pb collisions at RHIC and the LHC is currently explained by the interplay of radial flow and recombination as the dominant particle production mechanisms in this pt sector. The effect of radial flow is thus also seen in the low and intermediate pt region of RAA, where a mass hierarchy is observed among the baryons and mesons, respectively, with the heaviest particle being least suppressed. When comparing the results from RHIC and the LHC, the RCP is found to be similar at low-to-intermediate pt, while a significantly smaller RAA of K0s and Λ in central and peripheral events at the LHC is observed in this pt region as compared to the RHIC results. This can be attributed to the larger radial flow in AA collisions and to the harder spectra at the LHC. At high pt (pt > 8 GeV/c), a strong suppression in central Pb–Pb collisions with respect to pp collisions is found for K0s and Λ(Λ̄). A significant high-pt suppression of these hadrons is also observed in the ratio of central-to-peripheral collisions. The nuclear modification of K0s and Λ(Λ̄) is compatible with the modification of charged hadrons at
high pt. The calculations with the transport model BAMPS agree with these results suggesting a similar energy loss for all light quarks, i.e. u, d and s. Moreover, a compatible suppression for c-quarks appears in the ALICE measurements via the D meson RAA as well as in the BAMPS calculations, which hints to a flavour-independent suppression if light- and c-quarks are regarded. Within this consideration, no indication for a medium-modified fragmentation is found yet.
To summarize, for particle production in Pb–Pb collisions at the LHC relative to pp, no significant difference between the strangeness-carrying K0s and Λ(Λ̄) and hadrons made of u- and d-quarks was found, neither at lower pt (rapidity density) nor at higher pt (nuclear modification factor).
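The nuclear modification factor used throughout these comparisons is a simple bin-by-bin ratio of the AA spectrum to the N_coll-scaled pp spectrum; a sketch with toy spectra:

```python
def nuclear_modification_factor(yield_aa, n_coll, yield_pp):
    """R_AA(pt) = (dN_AA/dpt) / (N_coll * dN_pp/dpt), bin by bin.

    R_AA = 1 means AA collisions behave like a superposition of N_coll
    independent pp collisions; R_AA < 1 at high pt signals suppression.
    """
    return [y_aa / (n_coll * y_pp) for y_aa, y_pp in zip(yield_aa, yield_pp)]

# Toy pt spectra (arbitrary units), central collisions with N_coll = 1500:
pp = [1000.0, 100.0, 10.0, 1.0]
aa = [1.2e6, 9.0e4, 4.5e3, 300.0]
raa = nuclear_modification_factor(aa, 1500, pp)
# lowest bin: 1.2e6 / (1500 * 1000) = 0.8; highest bin: 300 / 1500 = 0.2
```

R_CP follows the same pattern with a peripheral event class replacing the pp reference, which is why the two observables are compared side by side above.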
Cytochrome c oxidase is the terminal enzyme in the respiratory chain of mitochondria and aerobic bacteria. This enzyme ultimately couples electron transfer from cytochrome c to an oxygen molecule with proton translocation across the inner mitochondrial and bacterial membrane. This reaction requires complicated chemical processes to occur at the catalytic site of the enzyme in coordination with proton translocation, the exact mechanism of which is not known at present. The mechanisms underlying oxygen activation, electron transfer and coupling of electron transfer to proton translocation are the main questions in the field of bioenergetics. The major goal of this work was to investigate the coupling of electron transfer and proton translocation in cytochrome c oxidase from Paracoccus denitrificans. Different theoretical approaches have been used to investigate the coupling of electron and proton transfer. This thesis presents an internal water prediction scheme in the enzyme and a molecular dynamics study of cytochrome c oxidase from Paracoccus denitrificans in the fully oxidized state, embedded in a fully hydrated dimyristoylphosphatidylcholine lipid bilayer membrane. Two parallel molecular dynamics simulations with different levels of protein hydration, 1.125 ns each in length, were carried out under conditions of constant temperature and pressure using three-dimensional periodic boundary conditions and full electrostatics to investigate the distribution and dynamics of water molecules and their corresponding hydrogen-bonded networks inside cytochrome c oxidase. The average number of solvent sites in the proton conducting K- and D- pathways was determined. The highly fluctuating hydrogen-bonded networks, combined with the significant diffusion of individual water molecules provide a basis for the transfer of protons in cytochrome c oxidase, therefore leading to a better understanding of the mechanism of proton pumping. 
The importance of the hydrogen bonding network and the possible coupling of local structural changes to larger scale changes in the cytochrome c oxidase during the catalytic cycle have been shown.
Cold target recoil ion momentum spectroscopy (COLTRIMS) has been employed to image the momentum distributions of continuum electrons liberated in the impact of slow He2+ on He and H2. The distributions were measured for fully determined motion of the nuclei, that is, as a function of the impact parameter and in a well defined scattering plane. The single ionization (SI) of H2 leading to H2+ recoil ions in nondissociative states (He2+ + H2 -> He2+ + H2+ + e-) and the transfer ionization (TI) of H2 leading to H2 dissociation into two free protons (He2+ + H2 -> He+ + H+ + H+ + e-) were investigated. Similar measurements have been carried out for the He target, the corresponding atomic two-electron system, i.e. the single ionization (He2+ + He -> He2+ + He+ + e-) and the transfer ionization (He2+ + He -> He+ + He2+ + e-). These measurements have been exploited to understand the results obtained for the H2 target. In comparing the continuum electron momentum distributions for H2 with those for He, a high degree of similarity is observed. In the case of transfer ionization of H2, the electron momentum distributions generated for parallel and perpendicular molecular orientations revealed no orientation dependence. The in-scattering-plane electron momentum distributions for the transfer ionization of H2 by He2+ and for the transfer ionization of He by He2+ showed that the salient feature of these distributions for both collision systems consists in the appearance of two groups of electrons with different structures. In addition to the group of saddle electrons forming two jets separated by a valley along the projectile axis, we find a new group of electrons moving with a velocity higher than the projectile velocity. These new fast forward electrons result from a narrow range of impact parameters and appear as an image saddle in the projectile frame.
In contrast to the transfer ionization of He, the fast forward electron group disappears in the in-scattering-plane electron momentum distribution generated for the single ionization of He. Instead, another new group of electrons appears. These electrons exhibit a degree of backscattering. These backward electrons appear as an image saddle in the target frame. The structures that the saddle electrons show are owing to the quasi-molecular nature of the collision process. For the TI of H2, the TI of He and the SI of He, a pi-orbital shape of the electron momentum distribution is observed. This indicates the importance of the rotational coupling 2pσ -> 2pπ in the initial promotion of the ground state, followed by further promotions to the continuum. The backward electrons as well as the fast forward electrons are not discussed in the theoretical literature at all. However, a number of obvious indications of the existence of the backward and fast forward electrons can be seen in the experimental works of Abdallah et al. as well as in the theoretical calculations of Sidky et al. One might speculate that electrons which are promoted on the saddle for some time during the collision could finally swing around the He+ ion on the way out of the collision, i.e. either around the projectile in the forward direction, as in the TI case, forming the fast forward electrons, or around the recoil ion in the backward direction, as in the SI case, forming the backward electrons. This might be a result of the strong gradient, and hence the large acceleration, of the screened He+ potential.
The topic of this thesis is the theoretical description of the hadron gas stages in heavy-ion collisions. The overall question addressed is: how does the hadronic medium evolve, i.e. what are the relevant microscopic reaction mechanisms and the properties of the involved degrees of freedom? The main goal is to address this question specifically for hadronic multi-particle interactions. To this end, the hadronic transport approach SMASH is extended with stochastic rates, which allow the inclusion of detailed-balance-fulfilling multi-particle reactions in the approach. Three types of reactions are newly accounted for: 3-to-1, 3-to-2 and 5-to-2 reactions. After extensive verification, the stochastic rates are used to study the effect of multi-particle interactions, particularly in afterburner calculations.
These studies follow complementary results for the dilepton and strangeness production with only binary reactions, which show that hadronic transport approaches are capable of describing observables when employed for the entire evolution of low-energy heavy-ion collisions. This is illustrated by the agreement of dilepton and strangeness production for smaller systems with SMASH calculations. It is, in particular, possible to match the measured strangeness production of phi and Xi hadrons via additional heavy nucleon resonance decay channels. For larger systems or higher energies, hadronic transport cascade calculations with vacuum resonance properties can point to medium effects. This is demonstrated extensively for the dilepton emission in comparisons to the full set of HADES dielectron data. The dilepton invariant mass spectra are sensitive to a medium modification of the vector meson spectral function for large collision systems already at low beam energies. The sensitivity to medium modifications is mapped out in detail by comparisons to a coarse-graining approach, which employs medium-modified spectral functions and is based on the same evolution.
The theoretical foundation of the stochastic rates are collision probabilities derived from the collision term of the Boltzmann equation under the assumption of a constant matrix element. This derivation is presented in a comprehensive and pedagogical fashion. The derived collision probabilities are employed for a stochastic collision criterion and for various multi-particle reactions that fulfill detailed balance: the mesonic Dalitz decay back-reaction (3-to-1), the deuteron catalysis (3-to-2) and the proton-antiproton annihilation back-reaction (5-to-2). The introduced stochastic rates approach is extensively verified by studies of its numerical stability and by comparisons to previous results and analytic expectations, with which the stochastic rates results agree perfectly.
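The essence of a stochastic collision criterion can be illustrated for the simplest, binary case: two test particles occupying the same cell react within a time step with a probability proportional to their cross section, relative velocity, and the step length over the cell volume. The following is a minimal sketch under these assumptions, not the actual SMASH implementation (function names and numbers are illustrative; the thesis's multi-particle generalization additionally involves phase-space integrals):

```python
import random

def collision_probability(sigma_mb, v_rel, dt_fm, cell_volume_fm3):
    """P = v_rel * sigma * dt / V for two particles in one cell (natural units)."""
    sigma_fm2 = sigma_mb * 0.1  # 1 mb = 0.1 fm^2
    return v_rel * sigma_fm2 * dt_fm / cell_volume_fm3

def collides(sigma_mb, v_rel, dt_fm, cell_volume_fm3, rng=random.random):
    # Accept the reaction if a uniform random number falls below P.
    # dt and V must be chosen such that P stays well below 1.
    return rng() < collision_probability(sigma_mb, v_rel, dt_fm, cell_volume_fm3)
```

Sampling every particle pair (or n-tuple) per cell and time step this way reproduces the collision term's reaction rate on average, which is what makes detailed-balance-fulfilling back-reactions straightforward to add.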
Physically, multi-particle reactions are demonstrated to have a significant impact on several observables, most notably the yields of the participating particles, even in the late dilute stage of heavy-ion reactions. They lead to a faster equilibration of the system than equivalent binary multi-step treatments. This difference in equilibration in turn influences the yields in afterburner calculations. Interestingly, the interpretation of the results does not depend on whether multi-particle or multi-step treatments are employed, which a posteriori validates the latter.
As the first test case of multi-particle reactions in heavy-ion collisions, the mesonic 3-to-1 Dalitz decay back-reaction is studied and found to be dominated by the omega channel. While the effect on the medium is negligible overall, the regeneration is sizable: up to a quarter of the Dalitz decays are regenerated.
Non-equilibrium rescattering effects are shown to be relevant in the late collision stages for two particle species: deuterons and protons. In both cases, the relevant rescatterings involve more than two particles.
The deuteron pion and nucleon catalysis reactions equilibrate quickly in the afterburner stage at intermediate energies. The constant formation and destruction keeps the yield constant and microscopically explains the "snowballs in hell" paradox. The yield is also generated when no deuterons are present at early times, which explains why coalescence models can match the multiplicity as well.
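The insensitivity of the late-time yield to the initial deuteron abundance can be illustrated with a toy relaxation equation (purely illustrative; the rate and units are hypothetical and the real dynamics involve the full catalysis reactions): once formation and destruction balance, the yield relaxes to the same fixed point whether it starts at zero or far above it.

```python
def evolve_deuteron_yield(n0, n_eq=1.0, rate=5.0, dt=0.01, steps=200):
    """Euler integration of dn/dt = rate * (n_eq - n): relaxation to equilibrium."""
    n = n0
    for _ in range(steps):
        n += rate * (n_eq - n) * dt
    return n
```

Starting from n0 = 0 (no deuterons, as in coalescence-like pictures) or from an overpopulated n0 = 3, the yield converges to the same equilibrium value, mirroring why both the transport catalysis and coalescence models reproduce the measured multiplicity.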
The study of the 5-body back-reaction of proton-antiproton annihilation is new: this work marks the first realization of microscopic 5-body reactions in a transport approach that fulfills detailed balance for such reactions. A sizable regeneration due to the back-reaction is found, recovering up to half of the proton-antiproton pairs lost to annihilation. Consequently, both annihilation and regeneration in the late non-equilibrium stage are shown to have a significant effect on the proton yield.
Chirality is a ubiquitous phenomenon in living nature and describes the symmetry property of an object of being distinguishable from its mirror image. Previous studies of the interaction between chiral molecules and light have focused on the regime of single- and multi-photon ionization; this work extends them into the strong-field regime. Within this thesis, experiments on individual chiral molecules in strong laser fields were prepared, performed and analyzed, and all charged fragments were studied in coincidence.
The presentation of the results follows the order in which the data analysis of many-particle breakups proceeds: first, the dichroism in the photoions (PICD) was examined for chiral signals in integral and differential form; then the asymmetries in the electron distributions were presented; and finally the connections between the ion and electron distributions were laid out.
Chapter 6 examined the (differential) ionization and fragmentation probability of various chiral molecules. The data presented in Chapter 6.1 linked, for the first time, the circular dichroism in photoion count rates (PICD) already discussed in the literature with the stronger differential PICD in the single ionization of methyloxirane. If the molecule dissociates quickly enough after ionization, the momentum vector of the charged fragment provides access to a fragmentation axis. By resolving along a molecular axis, the observed PICD is almost an order of magnitude stronger than the one integrated over all spatial directions.
With increasing complexity, Chapter 6.2 investigated a four-particle fragmentation of molecules from a racemic mixture of CHBrClF. By evaluating a triple product of the momentum vectors, the handedness of each individual molecule could be determined and the fully differential PICD studied. Fixing a fragmentation axis (analogous to Chapter 6.1) yielded PICD signals stronger by a factor of four, and resolving the complete molecular orientation enhanced the PICD signal strength by a factor of about 16, into the range of a few percent. Unfortunately, the theoretical description of this process far exceeds the current state of research. It can therefore not be excluded that a contribution to the PICD signal enhancement also stems from the dynamics of sequential multiple ionization.
The reaction investigated in Chapter 6.3 was the five-particle breakup of achiral formic acid. By measuring all ionic fragments, the internal coordinates and the orientation of the molecule could be determined analogously to the previous chapter. Indeed, a chiral fragmentation of the achiral formic acid was observed. Which enantiomer is observed in the fragmentation depends decisively on the molecular orientation relative to the ionizing laser pulse. This finding could lead to new approaches for laser-catalyzed enantioselective reactions. Furthermore, it could be shown that the observed handedness of the molecule depends not only on its orientation but also on the helicity of the ionizing laser pulse. This differential PICD of formic acid proved to be a sensitive probe of the molecular structure, in addition to exhibiting a very large signal strength of more than 20 %.
Chapter 7 presented the investigations of the three-dimensional momentum distributions of the photoelectrons. First, the general form of the dichroism in the photoelectrons (PECD) in the strong-field regime is addressed and the prevailing symmetries of the ionization regime are worked out (Chapter 7.1). With slightly increasing complexity, a clear connection between the asymmetry in the electron distribution and the fate of the remaining molecular ion could be established using the single ionization of methyloxirane (Chapter 7.2). This has an important implication for the usability of strong-field PECD as an analytical method in chemistry and pharmacy: the PECD integrated over all fragmentation channels is sensitive to the weighting of the fragments and thus also, for example, to the maximum laser intensity. The data suggest that the dependence of the PECD on the fragmentation channel is due to the different selection of sub-ensembles of molecular orientations.
When elliptically polarized light is used, a number of new effects arise compared to circular polarization (Chapter 7.3). First, even in the strong-field regime, the PECD shows a non-linear sensitivity to the polarization state, which also changes as a function of the electron's transverse momentum and of the fragmentation channel. The use of elliptically polarized light is therefore well suited for chiral recognition, as has since been confirmed in the literature. Moreover, the broken rotational symmetry of elliptically polarized light leads to an electron momentum distribution that is itself chiral: the PECD varies with the angle φ in the polarization plane, and the extrema of the PECD do not coincide with the maxima of the count rates. As a new chiral observable, we introduced an enantiosensitive and forward/backward-asymmetric rotation of the count-rate maxima. As a quantity derived from the same three-dimensional electron distribution, however, this observable is inseparably linked to the φ-dependent PECD.
Chapter 8 combined the (partial) knowledge of the molecular orientation and the PICD with the asymmetries of the electron distribution for the measurements of the fivefold ionization of formic acid (Chapter 8.1), the fourfold ionization of CHBrClF (Chapter 8.2) and the single ionization of methyloxirane (Chapter 8.3). In the formic acid and CHBrClF data sets, the molecular orientation showed a larger influence on the asymmetry in the electron distribution than the enantiomer or the helicity of the light. This link between molecular orientation and electron asymmetry transfers the asymmetries of the PICD onto the electron distribution. The measurement on methyloxirane, however, puts this connection into perspective, as it occurs with this strength only in some fragmentation channels. Evidently, the transfer of the asymmetry of the differential ionization probability is only one of the mechanisms leading to electron asymmetries in the strong-field regime.
High-energy astrophysics plays an increasingly important role in the understanding of our universe. On the one hand, this is due to ground-breaking observations, like the gravitational-wave detections of the LIGO and Virgo network or the black-hole shadow observations of the EHT collaboration. On the other hand, the field of numerical relativity has reached a level of sophistication that allows for realistic simulations that include all four fundamental forces of nature. A prime example of how observations and theory complement each other can be seen in the studies following GW170817, the first detection of gravitational waves from a binary neutron-star merger. The same detection is also the chronological starting point of this Thesis. The plethora of information and constraints on nuclear physics derived from GW170817 in conjunction with theoretical computations will be presented in the first part of this Thesis. The second part goes beyond this detection and prepares for future observations, when the high-frequency postmerger signal will also become detectable. Specifically, signatures of a quark-hadron phase transition are discussed and the specific case of a delayed phase transition is analyzed in detail. Finally, the third part of this Thesis focuses on the inclusion of radiative transport in numerical astrophysics. In the context of binary neutron-star mergers, radiation in the form of neutrinos is crucial for realistic long-term simulations. Two methods are introduced for treating radiation: the approximate state-of-the-art two-moment method (M1) and the recently developed radiative Lattice-Boltzmann method. The latter promises
to be more accurate than M1 at a comparable computational cost. Given that most existing methods for radiative transport are either inaccurate or computationally unfeasible, the derivation of this new method represents a novel and potentially paradigm-changing contribution to the accurate inclusion of radiation in numerical astrophysics.