The aim of this work was to evaluate the response of biological tissue samples to sparsely and densely ionizing radiation. To this end, the tissue samples were exposed to conventional X-rays as well as to a spread-out 12C-ion Bragg peak. For the irradiation of the biological samples with 12C, a depth-dose profile of a spread-out Bragg peak was generated with GSI's in-house simulation program TRiP98. A further aim of this work was to reproduce this depth-dose profile with three other simulation programs (ATIMA, MCHIT, TRIM) and to compare the results.
ATIMA and TRIM are general-purpose programs for the energy loss of ions in matter. They can reproduce the depth-dose profile calculated by TRiP98 only insufficiently: lacking fragmentation, they compute a linearly rising depth-dose profile. The Monte Carlo program MCHIT, which was developed specifically for the interaction of ions with matter in medical applications, shows the best agreement with the TRiP98 reference curve. Apart from a slightly higher average dose of about 0.1 Gy, the depth-dose profile could be reproduced almost exactly.
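None of the codes compared above reduces to a single formula, but the scaling they all encode, namely a deeper Bragg peak for a higher beam energy, is captured by the empirical Bragg-Kleeman rule R = alpha * E**p. A minimal sketch, with alpha and p set to illustrative proton-in-water-like values rather than anything taken from TRiP98, ATIMA, MCHIT or TRIM:

```python
# Toy range estimate via the empirical Bragg-Kleeman rule R = alpha * E**p.
# The default alpha and p are illustrative placeholder values, NOT fitted
# parameters of any of the codes compared in the text.

def bragg_kleeman_range(energy_mev_per_u, alpha=0.0022, p=1.77):
    """Approximate range in water (cm) for a given specific energy (MeV/u)."""
    return alpha * energy_mev_per_u ** p

# Higher beam energy -> deeper Bragg peak; a spread-out Bragg peak is built
# by superimposing several such mono-energetic slices at different depths.
for e in (100, 200, 300):
    print(e, round(bragg_kleeman_range(e), 2))
```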
The biological samples consisted of slice cultures of healthy mouse livers and explant cultures of healthy mouse pancreata, in order to estimate side effects of ionizing radiation. In addition, the response to 12C irradiation was determined in neoplastic liver tissue of transgenic c-myc/TGF-α mice with inducible liver tumors. To investigate a possible time-of-day dependence of the tissue response to irradiation, the slice and explant cultures were prepared at two different times of day: at the middle of the subjective day and at the middle of the subjective night.
The preparations were cultured for several days on a membrane at a liquid-air interface. Liver and pancreas cultures of healthy C3H wild-type mice were irradiated with a dose of 2 Gy, 5 Gy or 10 Gy of X-rays. Liver and pancreas cultures of transgenic mice were irradiated with spread-out C-ion Bragg peaks of the same doses. Unirradiated samples served as controls. All samples were fixed 1 h or 24 h after irradiation and examined immunohistochemically for markers of proliferation (Ki67), apoptosis (Caspase3) and DNA double-strand breaks (γH2AX).
While the pancreas preparations unfortunately yielded no evaluable results with respect to the investigated parameters, the parameters examined in healthy liver tissue showed clear day-night differences: the proliferation rate was significantly higher at the middle of the subjective day than at the middle of the subjective night. Conversely, the rates of DNA double-strand breaks were significantly elevated at the middle of the subjective night. These day-night differences could not be demonstrated in neoplastic liver tissue. Independent of radiation type and dose, irradiation had no influence on the investigated parameters in healthy liver tissue. In neoplastic liver tissue, by contrast, the rate of DNA double-strand breaks was increased by irradiation in a dose-dependent manner.
The effects of ionizing radiation on the circadian clockwork were examined in tissue samples of transgenic Per2luc mice. Per2luc mice express the enzyme luciferase under the control of the promoter of Per2, an important component of the circadian clockwork. The analysis of these animals therefore makes it possible to record the circadian rhythm of the molecular clockwork in liver and other tissues in real time by measuring luciferase activity. As could be shown in liver and adrenal cultures of these animals, ionizing radiation led to a dose-dependent phase advance of the circadian clockwork.
The results allow the conclusion that ionizing radiation shifts the circadian clockwork, but hardly influences proliferation and apoptosis in healthy liver tissue.
The aim of the present work was to minimize the systematic initial losses in SIS18. SIS18 is to serve as the injector for SIS100 in the newly planned FAIR facility, for which purpose its beam intensity must be increased. The dynamic vacuum in SIS18 and the initial beam losses, caused by multi-turn injection (MTI) or RF capture losses, play an essential role. To stabilize the dynamic residual-gas pressure in SIS18, these systematic initial losses must be minimized. Beam particles lost on the vacuum chamber wall lead, through ion-stimulated desorption, to a local pressure rise. This in turn increases the probability of collisions between residual-gas particles and beam ions, whereby the latter can undergo charge exchange and are then lost on the vacuum chamber after a dispersive element (dipole). This produces a further local pressure rise and causes a massive increase in the charge-exchange rates. One way to minimize or control the initial losses is to shift the MTI losses to the transfer channel (TK), since a pressure rise there does not disturb the circulating beam in SIS18. In the transfer channel, the beam edges are trimmed with slits, producing a sharply defined phase-space area. ...
The brain is arguably the most complex structure on Earth that humans have studied. It consists of a vast network of nerve cells capable of processing incoming sensory information and constructing from it a meaningful representation of the environment. It also coordinates the actions of the organism so that it can interact with its environment. The brain has the remarkable ability both to store information and to adapt constantly to changing conditions, and it does so over the entire lifespan. This is essential for humans and animals to develop and learn. The basis of this lifelong learning process is the brain's plasticity, which constantly adapts and rewires the vast network of neurons. The changes to the synaptic connections and to the intrinsic excitability of each neuron occur through self-organized mechanisms and optimize the behavior of the organism as a whole. The phenomenon of neural plasticity has occupied neuroscience and other disciplines for several decades. Intrinsic plasticity describes the constant adaptation of a neuron's excitability in order to maintain a balanced, homeostatic operating range. Synaptic plasticity in particular, which denotes the changes in strength of existing connections, has been studied under many different conditions and has proven ever more complex with each new study. It is induced by a complex interplay of biophysical mechanisms, depends on various factors such as the frequency of action potentials, their timing and the membrane potential, and moreover exhibits a metaplastic dependence on past events. Ultimately, synaptic plasticity influences the signal processing and computation of individual neurons and of neural networks.
The focus of this work is to advance, by means of a more unified theory, the understanding of the biological mechanisms that lead to the observed plasticity phenomena, and of their consequences. To this end, I formulate two functional objectives for neural plasticity, derive learning rules from them, and analyze their consequences and predictions.
Chapter 3 investigates the discriminability of population activity in networks as a functional objective for neural plasticity. The hypothesis is that, particularly in recurrent but also in feed-forward networks, the population activity as a representation of the input signals can be optimized if similar input signals receive representations that are as distinct as possible and are thereby easier to distinguish for subsequent processing. The functional objective is therefore to maximize this discriminability through changes to the connection strengths and the excitability of the neurons, using local, self-organized learning rules. From this functional objective, a number of standard learning rules for artificial neural networks can be derived jointly.
Chapter 4 applies a similar functional approach to a more complex, biophysical neuron model. The objective is to maximize, through local synaptic learning rules, a sparse, strongly asymmetric distribution of synaptic strengths, as has already been found experimentally several times. From this functional approach, all important phenomena of synaptic plasticity can be explained. Simulations of the learning rule in a realistic neuron model with full morphology account for the data from timing-, rate- and voltage-dependent plasticity protocols. The learning rule also has an intrinsic dependence on the position of the synapse, which agrees with the experimental results. Moreover, the learning rule can explain metaplastic phenomena without additional assumptions. The approach predicts a new form of metaplasticity that influences timing-dependent plasticity. The formulated learning rule leads to two novel unifications for synaptic plasticity: first, it shows that the various phenomena of synaptic plasticity can be understood as consequences of a single functional objective. Second, the approach bridges the gap between the functional and the mechanistic levels of description. The proposed functional objective leads to a learning rule with a biophysical formulation that can be related to established theories of the biological mechanisms. Furthermore, the objective of a sparse distribution of synaptic strengths can be interpreted as contributing to energy-efficient synaptic signal transmission and optimized coding.
Coupling local, slowly adapting variables to an attractor network makes it possible to destabilize all attractors, turning them into attractor ruins. The resulting attractor relict network may show ongoing autonomous latching dynamics. We propose to use two generating functionals for the construction of attractor relict networks: a Hopfield energy functional generating a neural attractor network, and a functional based on information-theoretical principles, encoding the information content of the neural firing statistics, which induces latching transitions from one transiently stable attractor ruin to the next. We investigate the influence of stress, in terms of conflicting optimization targets, on the resulting dynamics. Objective-function stress is absent when the target level for the mean of the neural activities is identical for the two generating functionals; the resulting latching dynamics is then found to be regular. Objective-function stress is present when the respective target activity levels differ, inducing intermittent bursting latching dynamics.
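A minimal numpy sketch of the first of the two generating functionals, the Hopfield energy E(s) = -1/2 s^T W s with Hebbian weights: asynchronous updates never increase this energy, which is what makes the stored patterns attractors. The toy binary network below illustrates only this attractor part, not the destabilization into attractor ruins or the latching dynamics:

```python
import numpy as np

# Toy Hopfield network: Hebbian weights from stored +/-1 patterns.
# Illustrates only the energy-functional half of the construction.

def hebbian_weights(patterns):
    P = np.asarray(patterns, dtype=float)
    W = P.T @ P / P.shape[1]
    np.fill_diagonal(W, 0.0)          # no self-coupling
    return W

def energy(W, s):
    return -0.5 * s @ W @ s           # Hopfield energy functional

def recall(W, s, sweeps=100):
    s = s.copy()
    for _ in range(sweeps):
        for i in range(len(s)):       # asynchronous update: E never increases
            s[i] = 1.0 if W[i] @ s >= 0 else -1.0
    return s

rng = np.random.default_rng(0)
patterns = rng.choice([-1.0, 1.0], size=(2, 32))
W = hebbian_weights(patterns)
noisy = patterns[0].copy()
noisy[:4] *= -1                       # corrupt a few bits of a stored pattern
restored = recall(W, noisy)           # relaxes downhill in energy
```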
What are the factors underlying human information production on a global level? In order to gain insight into this question we study a corpus of 252–633 million publicly available data files on the Internet, corresponding to an overall storage volume of 284–675 terabytes. Analyzing the file size distribution for several distinct data types, we find indications that the neuropsychological capacity of the human brain to process and record information may constitute the dominant limiting factor for the overall growth of globally stored information, with real-world economic constraints having only a negligible influence. This supposition draws support from the observation that the file size distributions follow a power law for data without a time component, such as images, and a log-normal distribution for multimedia files, for which time is a defining quale.
Author summary: The generation of new information is limited by two key factors: the economic costs incurred, and the capacity of the human brain to process and store data and information; the controlling agent needs to retain an overall understanding even when data is generated by semi-automatic processes. These processes are reflected in the statistical properties of the data files publicly available on the Internet. Collecting a corpus of 252–633 million files, we find that the statistics of the file size distribution are consistent with the supposition that data production on a global level is shaped and limited by the neuropsychological information-processing capacity of the brain, with economic and hardware constraints having a negligible influence.
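The power-law tail the study reports for time-free data types is conventionally characterized by its exponent, which can be recovered with the standard continuous maximum-likelihood (Hill) estimator. The sketch below is a generic illustration on synthetic data, not the paper's actual pipeline or corpus:

```python
import numpy as np

# Generic sketch: draw synthetic "file sizes" from a pure power law
# p(x) ~ x**(-alpha) for x >= xmin via inverse-CDF sampling, then recover
# alpha with the continuous maximum-likelihood (Hill) estimator.

def sample_power_law(alpha, xmin, n, rng):
    u = rng.random(n)
    return xmin * (1.0 - u) ** (-1.0 / (alpha - 1.0))

def hill_estimator(x, xmin):
    x = np.asarray(x, dtype=float)
    x = x[x >= xmin]
    return 1.0 + x.size / np.log(x / xmin).sum()

rng = np.random.default_rng(42)
sizes = sample_power_law(alpha=2.3, xmin=1.0, n=50_000, rng=rng)
alpha_hat = hill_estimator(sizes, xmin=1.0)
```

For a log-normal, by contrast, the same estimator drifts with xmin instead of stabilizing, which is one simple way the two candidate distributions can be told apart.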
Abstract: Simple cells in primary visual cortex were famously found to respond to low-level image components such as edges. Sparse coding and independent component analysis (ICA) emerged as the standard computational models for simple cell coding because they linked their receptive fields to the statistics of visual stimuli. However, a salient feature of image statistics, occlusions of image components, is not considered by these models. Here we ask whether occlusions have an effect on the predicted shapes of simple cell receptive fields. We use a comparative approach to answer this question and investigate two models for simple cells: a standard linear model and an occlusive model. For both models we simultaneously estimate optimal receptive fields, sparsity and stimulus noise. The two models are identical except for their component superposition assumption. We find that the image encodings and receptive fields predicted by the two models differ significantly. While both models predict many Gabor-like fields, the occlusive model predicts a much sparser encoding and high percentages of 'globular' receptive fields. This relatively new center-surround type of simple cell response has been observed since reverse correlation came into use in experimental studies. While high percentages of 'globular' fields can be obtained using specific choices of sparsity and overcompleteness in linear sparse coding, no or only low proportions are reported in the vast majority of studies on linear models (including all ICA models). Likewise, for the linear model investigated here and optimal sparsity, only low proportions of 'globular' fields are observed. In comparison, the occlusive model robustly infers high proportions and can match the experimentally observed high proportions of 'globular' fields well. Our computational study therefore suggests that 'globular' fields may be evidence for an optimal encoding of visual occlusions in primary visual cortex.
Author Summary: The statistics of our visual world are dominated by occlusions. Almost every image processed by our brain consists of mutually occluding objects, animals and plants. Our visual cortex is optimized through evolution and throughout our lifespan for such stimuli. Yet the standard computational models of primary visual processing do not consider occlusions. In this study, we ask what effects visual occlusions may have on the predicted response properties of simple cells, which are the first cortical processing units for images. Our results suggest that recently observed differences between experiments and the predictions of the standard simple cell models can be attributed to occlusions. The most significant consequence of occlusions is the prediction of many cells sensitive to center-surround stimuli. Experimentally, large numbers of such cells have been observed since new techniques (reverse correlation) came into use. Without occlusions, they are obtained only for specific settings, and none of the seminal studies (sparse coding, ICA) predicted such fields. In contrast, the new type of response naturally emerges as soon as occlusions are considered. In comparison with recent in vivo experiments we find that occlusive models are consistent with the high percentages of center-surround simple cells observed in macaque monkeys, ferrets and mice.
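The linear model discussed above infers, for a fixed dictionary of receptive fields, coefficients that trade reconstruction error against an L1 sparsity penalty; a standard solver for this is iterative soft thresholding (ISTA). The sketch below is a generic numpy illustration with a random dictionary and a synthetic stimulus; it is not the occlusive model and not the authors' estimation procedure:

```python
import numpy as np

# Generic linear sparse coding: minimize 0.5*||x - D a||^2 + lam*||a||_1
# over coefficients a for a FIXED dictionary D, via ISTA
# (a gradient step followed by soft thresholding).

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(D, x, lam, n_iter=2000):
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)
        a = soft_threshold(a - grad / L, lam / L)
    return a

rng = np.random.default_rng(1)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)             # unit-norm dictionary elements
a_true = np.zeros(128)
a_true[[3, 40, 99]] = [1.5, -2.0, 1.0]     # a sparse "cause" of the stimulus
x = D @ a_true                             # synthetic stimulus
a_hat = ista(D, x, lam=0.01)               # sparse encoding of x
```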
Part of Focus on High Energy Density Physics. In this paper, we present a novel theoretical approach, which allows the study of nonequilibrium dynamics of both electrons and atoms/ions within free-electron laser excited semiconductors at femtosecond time scales. The approach consists of the Monte-Carlo method treating photoabsorption, high-energy-electron and core-hole kinetics and relaxation processes. Low-energy electrons localized within the valence and conduction bands of the target are treated with a temperature equation, including source terms, defined by the exchange of energy and particles with high-energy electrons and atoms. We follow the atomic motion with the molecular dynamics method on the changing potential energy surface. The changes of the potential energy surface and of the electron band structure are calculated at each time step with the help of the tight-binding method. Such a combination of methods enables investigation of nonequilibrium structural changes within materials under extreme ultraviolet (XUV) femtosecond irradiation. Our analysis performed for diamond irradiated with an XUV femtosecond laser pulse predicts for the first time in this wavelength regime the nonthermal phase transition from diamond to graphite. Similar to the case of visible light irradiation, this transition takes place within a few tens of femtoseconds and is caused by changes of the interatomic potential induced by ultrafast electronic excitations. It thus occurs well before the heating stimulated by electron–phonon coupling starts to play a role. This allows us to conclude that this transition is nonthermal and represents a general mechanism of the response of solids to ultrafast electron excitations.
In non-hadronic axion models, which have a tree-level axion-electron interaction, the Sun produces a strong axion flux by bremsstrahlung, Compton scattering, and axio-recombination, the "BCA processes." Based on a new calculation of this flux, including for the first time axio-recombination, we derive limits on the axion-electron Yukawa coupling g_ae and the axion-photon interaction strength g_aγ using the CAST phase-I data (vacuum phase). For m_a ≲ 10 meV/c² we find g_aγ g_ae < 8.1 × 10^−23 GeV^−1 at 95% CL. We stress that a next-generation axion helioscope such as the proposed IAXO could push this sensitivity into a range beyond stellar energy-loss limits and test the hypothesis that white-dwarf cooling is dominated by axion emission.
Supersurface electron scattering, i.e., electron energy losses and associated deflections in vacuum above the surface of a medium, is shown to contribute significantly to electron spectra. We have obtained experimental verification (in absolute units) of theoretical predictions that the angular distribution of the supersurface backscattering probability exhibits strong oscillations which are anticorrelated with the generalized Ramsauer-Townsend minima in the backscattering probability. We have investigated 500-eV electron backscattering from an Au surface for an incidence angle of 70° and scattering angles between 37° and 165°. After removing the contribution of supersurface scattering from the experimental data, the resulting angular and energy distribution agrees with the Landau-Goudsmit-Saunderson (LGS) theory, which was proposed about 60 years ago, while the raw data are anticorrelated with LGS theory. This result implies that supersurface scattering is an essential phenomenon for quantitative understanding of electron spectra.
In the study of trapped two-component Bose gases, a widely used dynamical protocol is to start from the ground state of a one-component condensate and then switch half the atoms into another hyperfine state. The slightly different intra-component and inter-component interactions can then lead to highly non-trivial dynamics, especially in the density mismatch between the two components, commonly referred to as 'spin' density. We study and classify the possible subsequent dynamics, over a wide variety of parameters spanned by the trap strength and by the inter- to intra-component interaction ratio. A stability analysis suited to the trapped situation provides us with a framework to explain the various types of dynamics in different regimes.
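A useful first orientation in the parameter plane scanned above (trap strength versus inter- to intra-component interaction ratio) is the textbook miscibility criterion for a homogeneous two-component condensate: with equal intra-component couplings g11 = g22 = g, the mixed state is stable when g12² < g², and the spin-density channel becomes unstable (phase separation) otherwise. A one-line check of this criterion, as a generic sketch in the standard Gross-Pitaevskii notation (the trapped stability analysis of the paper refines, and can deviate from, this uniform-gas rule):

```python
# Homogeneous two-component condensate with equal intra-component couplings
# g11 = g22 = g > 0: the mixed state is miscible when g12**2 < g**2,
# i.e. |g12/g| < 1; otherwise the components phase-separate.

def miscible(g, g12):
    """Miscibility of a uniform two-component BEC with g11 = g22 = g > 0."""
    return g12 ** 2 < g ** 2

print(miscible(g=1.0, g12=0.8))   # mixed regime
print(miscible(g=1.0, g12=1.2))   # phase-separated regime
```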
This dissertation presents the systematic inclusion of gauge corrections in the theory of thermal leptogenesis, which offers an explanation for the origin of matter in our universe.
Starting from the widely accepted Big Bang model, matter and antimatter should have been created in equal parts. Due to annihilation processes, all matter should consequently have been converted into radiation, leaving behind an empty universe. Since this is not the case, the question arises of how the imbalance between matter and antimatter could have come about. The value of the asymmetry can be determined very precisely by experiments. For a systematic theoretical description of this problem, A. Sakharov formulated three conditions: 1. violation of baryon number; 2. violation of the invariance under charge conjugation C and under the combination of charge conjugation and parity, CP; and 3. a departure from thermal equilibrium.
Since the Big Bang model and the Standard Model of particle physics are unable to describe this asymmetry, the present dissertation deals with the theory of thermal leptogenesis, which starts from a lepton asymmetry rather than a primordial baryon asymmetry. At a later time, this asymmetry is converted into a baryon asymmetry by sphaleron processes, which violate baryon number. For this purpose, new particles are added to the Standard Model: heavy Majorana neutrinos. Out of thermal equilibrium, these decay in a CP-violating manner into the known Standard Model leptons and Higgs particles.
In this work, a hierarchical arrangement of the three heavy neutrino masses is considered. As a consequence, two of the three Majorana neutrinos can be integrated out and an effective theory can be set up. This model is also called vanilla leptogenesis and is used in what follows.
The dissertation is organized as follows. The introductory considerations are the subject of Chapters 1 and 2, where other models for solving the problem of the baryon asymmetry are also briefly presented. Thermal leptogenesis is introduced, and the see-saw mechanism as well as the CP asymmetry are described in more detail. At the end of the chapter, the classical approach to leptogenesis via Boltzmann equations is presented.
In Chapter 3 the foundations of non-equilibrium quantum field theory are introduced. The most important definitions for the case of thermal equilibrium are given, followed by the generalization to non-equilibrium states. The equations of motion, the so-called Kadanoff-Baym equations, are then solved both for scalar particles and for fermions.
Chapter 4 establishes the necessity of including gauge corrections in the context of thermal leptogenesis. By defining a lepton number matrix, the asymmetry can be rewritten in terms of the Kadanoff-Baym equation for leptons. Since the comparison of the Boltzmann and Kadanoff-Baym equations in the last part of this chapter reveals differences in the time behavior, thermal Standard Model widths of the Higgs field and the leptons are introduced by hand into the Kadanoff-Baym approach. With this naive extension, the lepton number matrix behaves, locally in time, like the solution of the Boltzmann equation. A systematic introduction of Standard Model corrections for thermal leptogenesis is therefore indispensable, which is why, in the present dissertation, gauge corrections to the diagrams that generate the asymmetry are taken into account from first principles.
The four scale regions relevant to this work require two resummation schemes, Hard Thermal Loop (HTL) and Collinear Thermal Loop (CTL), which are presented in Chapter 5. This finally leads to two differential equations for the calculation of the thermal production rate of the Majorana neutrino, which are evaluated numerically in Chapter 6.
In Chapter 7, a naive calculation of all gauge-corrected three-loop diagrams belonging to the two asymmetry-generating diagrams is carried out first. Since a simple calculation of the three-loop diagrams is not sufficient, a new, cylindrical diagram is introduced at this point, which contains all important contributions, in particular the HTL- and CTL-resummed ones. At the end of the chapter, the first closed expression for the gauge-corrected lepton number matrix at leading order in all couplings is given.
Finally, Chapter 8 gives a short summary and an outlook. This dissertation presents for the first time a systematic approach to taking all gauge interactions into account in the theory of thermal leptogenesis. A closed expression for the gauge-corrected lepton asymmetry could be presented.
In nuclear astrophysics, experiments with highly charged radionuclides are of great importance. These exotic nuclides can be produced at heavy-ion accelerator facilities and stored in storage rings. At present, two facilities worldwide make such experiments possible: the GSI Helmholtzzentrum für Schwerionenforschung GmbH in Darmstadt and the Institute of Modern Physics (IMP) in Lanzhou, China. Since the yield of these nuclides is low, non-destructive detection methods are used in the storage rings. These make use of the methods of spectral analysis. Not only the low yield, but also the short lifetime of these nuclides place high demands on the sensitivity and speed of these detectors.
A common method is the use of capacitive Schottky probes. Such a probe has been in operation at GSI in the ESR storage ring since 1991. To increase the sensitivity, microwave cavities can be used as resonant pickups. The electromagnetic fields induced by the particles can excite resonant modes in the cavity. The geometry of the pickup and the material used play an essential role in shaping the field patterns. The resulting signals, also called Schottky signals, are coupled out via an antenna and then fed to a spectrum analyzer. For the analysis of the stored data, various methods of spectral estimation, such as the multi-taper method, can be applied. After an external calibration has been carried out, the pickup can also be used as a current sensor.
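The multi-taper method mentioned above averages periodograms taken with a family of orthogonal DPSS (Slepian) tapers, which lowers the variance of the spectral estimate at a fixed bandwidth; scipy ships the tapers. A generic sketch on a synthetic single tone standing in for a Schottky line (sample rate, tone frequency and noise level are illustrative, not ESR data):

```python
import numpy as np
from scipy.signal.windows import dpss

# Multi-taper spectral estimate: average the periodograms computed with
# K orthogonal DPSS (Slepian) tapers. Synthetic single-tone test signal.

fs = 1000.0                      # sample rate (Hz), illustrative
n = 4096
t = np.arange(n) / fs
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 125.0 * t) + 0.5 * rng.standard_normal(n)

tapers = dpss(n, NW=4, Kmax=7)   # (7, 4096) array of orthogonal tapers
spectra = [np.abs(np.fft.rfft(taper * x)) ** 2 for taper in tapers]
psd = np.mean(spectra, axis=0)   # variance-reduced spectral estimate

freqs = np.fft.rfftfreq(n, d=1 / fs)
peak = freqs[np.argmax(psd)]     # recovered line frequency (Hz)
```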
This work deals with the theory, construction and first applications of a new resonant pickup, which was installed in the ESR storage ring in 2010 and has since been employed successfully in several experiments. A similar pickup was installed in the CSRe at IMP Lanzhou in 2011. Single heavy ions at 400 MeV per nucleon were successfully detected with the GSI pickup. The pickup is regularly used in storage-ring experiments. Similar experiments are planned for the CSRe in Lanzhou.
Heparin is used as an anticoagulant drug in many areas: at low doses it is used primarily for thrombosis prophylaxis, while at higher concentrations it is employed, for example, in hemodialysis or in cardiac surgery with a heart-lung machine, in order to prevent the patient's blood from clotting. Although heparin has been in use for many decades, there is still no method for determining the heparin concentration simply, quickly and inexpensively during the course of surgery. Instead, the state of the patient's blood is assessed via coagulation assays that depend only indirectly on heparin and are influenced by many parameters. Monitoring the heparin level is not possible with these methods. A further problem arises when normal blood coagulation is to be restored at the end of the procedure. For this purpose, protamine is administered, which is intended to bind the heparin circulating in the patient's blood and thereby neutralize its anticoagulant effect. However, the protamine is not administered, as would be ideal, according to the current heparin concentration, since no heparin assay currently exists. This can result in incorrect heparin neutralization, which is associated with far-reaching side effects, above all an increased risk of bleeding.
Because of this problem, a scattered-light photometric method (LiSA-H) was developed that makes it possible to determine the heparin concentration of a patient sample during surgical procedures. It is based on measuring the intensity of the light scattered by heparin-protamine nanoparticles. These nanoparticles form as soon as protamine is added to a solution containing heparin, e.g. heparinized blood plasma.
Using analytical ultracentrifugation and atomic force microscopy images, the size and size distribution of the heparin-protamine particles were characterized. Both methods showed results in good agreement and yielded particle diameters of about 70–200 nm.
To optimize the measurement process, filtration methods were sought in order to avoid the time-consuming and laborious centrifugation step. To this end, filter membranes made of various materials and with different pore sizes were tested, which were intended to allow plasma to be obtained by filtration of whole blood. Unfortunately, this was not possible with the filter systems tested. It nevertheless remains a topical issue and will continue to be investigated.
In addition to the scattered-light measurement method, it could be shown that fluorescence spectroscopy makes it possible to determine small heparin concentrations. For this purpose, protamine sulfate was labeled with fluorescent dyes and the decrease in the emission intensity of the fluorescent protamine after addition of heparin was observed. The heparin concentration can be inferred from the degree of this intensity decrease. This method would be excellently suited to complement the scattered-light procedure, which becomes increasingly insensitive in the low concentration range. However, further measurements are needed to show whether measurements of plasma or even whole-blood samples are also possible.
A clinical prototype was developed that enables the heparin concentration in a blood plasma sample to be determined during surgical procedures. It uses an LED with an emission maximum at 627 nm, and the scattered-light intensity is used to determine the number and size of the heparin-protamine particles. Control of the measurement and evaluation of the data are realized with a netbook and software newly developed for this purpose. With this prototype, the heparin concentration of a blood plasma sample can be determined reproducibly within a few minutes from the change in its scattered-light intensity after the addition of protamine. A calibration function was established with which the heparin concentration can be calculated from the scattered-light intensity.
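The calibration step can be sketched generically as a monotonic mapping from scattered-light signal to heparin concentration. In the sketch below, hypothetical calibration points and piecewise-linear interpolation with np.interp stand in for the actual LiSA-H calibration function, which is not given in the text:

```python
import numpy as np

# Generic calibration sketch: map a measured scattered-light intensity change
# to a heparin concentration by interpolating between calibration points.
# The calibration pairs below are hypothetical placeholders, NOT LiSA-H data.

cal_intensity = np.array([0.0, 0.15, 0.40, 0.70, 1.00])   # normalized signal
cal_heparin = np.array([0.0, 1.0, 2.0, 4.0, 6.0])         # IU/ml, illustrative

def heparin_concentration(intensity):
    """Piecewise-linear inverse of the (monotonic) calibration curve."""
    return float(np.interp(intensity, cal_intensity, cal_heparin))

print(heparin_concentration(0.55))
```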
A first study at the University Hospital of the Johann Wolfgang Goethe-Universität Frankfurt a.M., in which heparin was determined with the new heparin assay in parallel with routine coagulation monitoring during 50 cardiac surgeries using a heart-lung machine, showed that this procedure makes it possible to determine the heparin concentration in the patient's blood over the course of the operation. From this, further information such as the individual rate of heparin elimination could be obtained.
A second study in the pediatric cardiology department of the University Hospital Gießen, whose results have not yet been fully evaluated statistically, was likewise completed successfully. The preliminary results showed that heparin elimination kinetics differ markedly between adults and children. Moreover, the measured coagulation time correlated considerably worse with the measured heparin concentration in children (only 30% of cases) than in adults (about 70% of cases).
This work was carried out within the funding program "Profil NT" and was part of the BMBF project "NANOTHERM" (FKZ 17PNT005). Its purpose was to investigate the possibility of integrating and using nanowires as the function-determining component of a thermoelectric sensor element. An important task was to investigate the thermoelectric properties of individual nanowires, in particular the Seebeck coefficient. With a view to the further development of nanotechnology, it is very important to create suitable measurement platforms for the characterization of nanostructures and to make them available to the scientific community. For research, this means that the "physics of the small" can be studied with ever greater precision. With regard to applications, the investigations performed here form an essential basis for device optimization and later industrial use.
In this work, two chip designs for determining the Seebeck coefficient are presented that generate a sufficiently high temperature difference across nanostructures. The micromechanical fabrication of both chips is explained in detail. In addition, the chips were analyzed in FEM simulations. A metrological characterization of the chips confirms the simulations and the suitability of the chips for investigations of the Seebeck coefficient of nanostructures. For the first time, tungsten and platinum FEBID deposits were investigated with respect to their Seebeck coefficient. The tungsten deposits showed a negative Seebeck coefficient, which remained stable over several days. Temperature-dependent measurements of the Seebeck coefficient revealed a square-root-of-T dependence, as described by theory.
An investigation of the Seebeck coefficient of Pt FEBID deposits shows a sign change for samples with low electrical conductivity (insulating character, weak coupling). In the literature, however, this sign change is described for samples with metallic electrical conductivity. In view of these results, it must be examined to what extent the theory of the Seebeck coefficient can be transferred to samples with weak coupling. Since the measured Seebeck coefficients of some nanoscale samples were very small, the Seebeck coefficient of the contact material was investigated in separate experiments. For the Ti(40nm)/Au(120nm) layer system used here, a Seebeck coefficient of -0.22 µV/K was obtained. This contribution of the contact layer system to the thermovoltage was taken into account in the characterization of the Pt FEBID deposits.
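The contact correction described above can be sketched as follows. A thermovoltage measured against the Ti/Au leads yields a Seebeck coefficient relative to the leads; the leads' own contribution (-0.22 µV/K from the text) must then be accounted for. The sign convention and the example numbers are assumptions for illustration:

```python
# Contact correction sketch: the thermovoltage is measured against the
# Ti(40nm)/Au(120nm) leads, whose Seebeck coefficient of -0.22 uV/K
# (value from the text) is added back to obtain the absolute coefficient
# of the nanostructure.  Sign convention assumed for illustration.
S_CONTACT_UV_PER_K = -0.22  # Ti/Au contact layer system

def absolute_seebeck(thermovoltage_uv, delta_t_k):
    """Seebeck coefficient from U_th / dT, corrected for the contact leads."""
    s_relative = thermovoltage_uv / delta_t_k      # uV/K, relative to leads
    return s_relative + S_CONTACT_UV_PER_K         # uV/K, absolute (assumed sign)

# Illustrative numbers: 1 uV over 0.5 K gives 2.0 uV/K relative to the
# leads and 1.78 uV/K after the contact correction.
print(absolute_seebeck(1.0, 0.5))
```

For samples whose Seebeck coefficient is itself of order 0.1 µV/K, this correction is clearly not negligible, which is why it was determined in separate experiments.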
Investigations of BiTe nanowires with the Seebeck chip yielded a negative Seebeck coefficient. The first investigations were performed with copper as contact material because of its very good lift-off properties. Despite copper diffusion into the nanowire, the negative Seebeck coefficient is attributed to a tellurium excess, since subsequent investigations on samples with a suitable diffusion barrier likewise showed a negative Seebeck coefficient. The mobilities obtained are lower than those of bulk material and can be explained by classical size effects. The measured charge-carrier concentrations lie in ranges typical of semimetals. Characterization of the Seebeck coefficient with the Z-chip presented here also yielded a negative Seebeck coefficient for the BiTe nanowires, again attributable to a tellurium excess as explained above. An estimate for a sensor built from nanowires shows that markedly higher sensitivities can be achieved than with conventional thin-film thermopiles. First technological concepts for building nanowire arrays were developed and verified by corresponding investigations.
In principle, the Z-chip is suitable for characterizing all three transport coefficients and offers the option of providing other research groups with a universal thermoelectric measurement platform.
The way we perceive the visual world depends crucially on the state of the observer. In the present study we show that what we are holding in working memory (WM) can bias the way we perceive ambiguous structure-from-motion stimuli. Holding in memory the percept of an unambiguously rotating sphere influenced the perceived direction of motion of an ambiguously rotating sphere presented shortly thereafter. In particular, we found a systematic difference between congruent dominance periods, in which the perceived direction of the ambiguous stimulus corresponded to the direction of the unambiguous one, and incongruent dominance periods. Congruent dominance periods were more frequent when participants memorized the speed of the unambiguous sphere for delayed discrimination than when they performed an immediate judgment on a change in its speed. The analysis of dominance time-course showed that a sustained tendency to perceive the same direction of motion as the prior stimulus emerged only in the WM condition, whereas in the attention condition perceptual dominance dropped to chance levels at the end of the trial. The results are explained in terms of a direct involvement of early visual areas in the active representation of visual motion in WM.
Detailed knowledge of reaction mechanisms is key to understanding chemical, biological, and biophysical processes. For many reasons, it is desirable to comprehend how a reaction proceeds and what influences the reaction rate and its products.
In biophysics, reaction mechanisms provide insight into enzyme and protein function, the reason why they are so efficient, and what determines their reaction rates. They also reveal the relationship between the function of a protein and its structure and dynamics.
In chemistry, reaction mechanisms are able to explain side products, solvent effects, and the stereochemistry of a product. They are also the basis for potentially optimizing reactions with respect to yield, enhancing the stereoselectivity, or for modifying reactions in order to obtain other related products.
A key step in investigating reaction mechanisms is the identification and characterization of intermediates, which may be reactive, short-lived, and therefore only weakly populated. Nowadays, the structures of such intermediates can in most cases only be hypothesized based on products, side products, and isolable intermediates, because intermediates with a lifetime of less than a few microseconds are not accessible with the commonly used techniques for structure determination such as X-ray crystallography and nuclear magnetic resonance (NMR) spectroscopy.
In this thesis, two-dimensional infrared (2D-IR) spectroscopy is shown to be a powerful complement to the existing techniques for structure determination in solution. 2D-IR spectroscopy uses a femtosecond laser setup to investigate interactions between vibrations - analogous to 2D-NMR, which investigates the interactions between spins. Its ultrafast time resolution makes 2D-IR spectroscopy particularly well suited for the two topics investigated in this thesis: Structure Determination of Reactive Intermediates and Conformational Dynamics of Proteins.
Structure Determination of Reactive Intermediates: The focus of this thesis is using polarization-dependent 2D-IR (P2D-IR) spectroscopy for structure determination of N-crotonyloxazolidinone (referred to as 1), a small organic compound with a chiral oxazolidinone, known as the Evans auxiliary, and its reactive complexes with the Lewis acids SnCl4 and Mg(ClO4)2. Chiral oxazolidinones in combination with Lewis acids have been used frequently in stereoselective synthesis for over 30 years. Nevertheless, the detailed mechanisms are in many cases still mere hypotheses and have not yet been experimentally proven. By accurately measuring the angles between the transition dipole moments in the molecules using an optimized P2D-IR setup and comparing the results to DFT calculations, the conformation of 1 and the conformation and coordination of the main complexes with SnCl4 and Mg(ClO4)2 are unequivocally identified and analyzed in depth. Structural details, such as a slight twist in the solution structure of 1, are detected using P2D-IR spectroscopy; these cannot be inferred from NMR spectroscopy or DFT calculations. In addition to the main Lewis acid complexes, complexes present in low concentration are detected and tentatively assigned to different conformations and complexation geometries. Knowledge of these structures is essential for rationalizing the observed stereoselectivities. Additionally, a method is introduced that enables structure determination of molecules in complex mixtures, even in the presence of molecules with similar spectral properties present in high concentration. This work sets the stage for future studies of other substrate-catalyst complexes and reaction intermediates whose structure determination has not been possible to date.
Conformational Dynamics of Proteins: Exchange 2D-IR spectroscopy allows the investigation of fast dynamics without disturbing the equilibrium of the exchanging species. It is therefore well suited to investigate fast protein dynamics and to reveal their speed limit. The temperature dependence of the conformational dynamics between the myoglobin substates A1 and A3 in equilibrium is analyzed. The various substates of myoglobin can be detected with FTIR spectroscopy if carbon monoxide is bound to the heme. From previous studies it is known that the exchange rates at room temperature are in the picosecond time range, well suited to investigation by 2D-IR spectroscopy. In the temperature range between 0 °C and 40 °C, only a weak temperature dependence of the exchange rate in the myoglobin mutant L29I is observed in the present study: the exchange rate approximately doubles from 15 ns⁻¹ at 0 °C to 31 ns⁻¹ at 40 °C. The conformational dynamics turn out to correlate linearly with the solvent viscosity, which is itself temperature dependent. A comparison with measurements at cryogenic temperatures shows that the linear relation between the exchange time constant for this process and the viscosity holds over the temperature range from -100 °C to 40 °C (corresponding to a viscosity change of 14 orders of magnitude). Thus it is proven that the dynamics of the conformational switching are mainly determined by solvent dynamics, i.e., the protein dynamics are slaved to the solvent dynamics. This is the first time slaving has been observed for such fast processes (in the picosecond time range). The observation implies a long-range structural rearrangement between the myoglobin substates A1 and A3. In addition, the exchange for other mutants and wild-type myoglobin is analyzed qualitatively and found to agree with the conclusions drawn from L29I myoglobin.
The human immunodeficiency virus (HIV) currently ranks sixth among worldwide causes of death [1]. One treatment approach is to inhibit reverse transcriptase (RT), an enzyme essential for reverse transcription of viral RNA into DNA before integration into the host genome [2]. By using non-nucleoside RT inhibitors (NNRTIs) [3], which target an allosteric binding site, major side effects can be evaded. Unfortunately, the high genetic variability of HIV, in combination with the selection pressure introduced by drug treatment, enables the virus to develop resistance against this drug class through point mutations. This situation necessitates treatment with alternative NNRTIs that target the particular RT mutants encountered in a patient.
Previously, proteochemometric approaches have demonstrated some success in predicting the binding of particular NNRTIs to individual mutants; however, a structure-based approach may help to further improve the predictive success of such models. Hence, our aim is to rationalize the experimental activity of known NNRTIs against a variety of RT mutants by combining molecular modeling, long-timescale atomistic molecular dynamics (MD) simulation sampling, and ensemble docking. Initial control experiments on known inhibitor-RT mutant complexes using this protocol were successful, and the predictivity for further complexes is currently being evaluated. In addition to predictive power, MD simulations of multiple RT mutants are providing fundamental insight into the dynamics of the allosteric NNRTI binding site, which is useful for the design of future inhibitors. Overall, it is hoped that work of this type will contribute to the development of predictive efficacy models for individual patients, and hence towards personalized HIV treatment options.
In this work, a measurement method was developed that allows quantitative statements about specific constituents of body fluids by means of infrared spectroscopy. For this purpose, selected blood plasma and whole-blood samples as well as selected urine samples were measured. The correct selection of the sample set is of great importance in order to obtain a large, independent variance of the absorption values for each component. To this end, both physiological and pathological samples were included in the data set. Conventional clinical methods were used to obtain reference values for these selected samples. In principle, the accuracy of this method is limited by the accuracy of the respective reference method, i.e., the conventional clinical methods. The newly developed method now makes it possible to determine the most important parameters in blood and urine quickly, simply, and without reagents. In addition to the constituents reported in this work, quantitative statements can be made for further components above a certain threshold value. For example, pathological samples could be identified for albumin or glucose in urine, allowing conclusions about specific disease patterns. ...
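The quantitative evaluation described above rests on calibrating measured IR absorbances against reference concentrations from clinical methods. A minimal sketch of such a calibration using ordinary least squares over a few wavenumber channels; plain least squares stands in here for whatever multivariate method the thesis actually used, and all numbers are synthetic:

```python
import numpy as np

# Synthetic calibration data: rows = samples, columns = absorbance at
# selected wavenumbers; y = reference concentrations from a clinical
# method.  Noise-free for clarity of the sketch.
rng = np.random.default_rng(0)
true_coef = np.array([2.0, -1.0, 0.5])          # hypothetical spectral weights
X = rng.uniform(0.1, 1.0, size=(20, 3))         # absorbance "spectra"
y = X @ true_coef                                # reference concentrations

# Least-squares calibration (a stand-in for PLS/multivariate calibration).
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# Predict the concentration of a new sample from its absorbances.
new_sample = np.array([0.4, 0.2, 0.6])
prediction = float(new_sample @ coef)
print(prediction)  # with this noise-free synthetic data: 0.9
```

As the abstract notes, the accuracy of any such calibration is bounded by the accuracy of the clinical reference method supplying y.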
Low-level laser irradiation with visible light was introduced as a medical treatment more than 40 years ago, but its medical application still remains controversial. Laser stimulation of acupuncture points has also been introduced, and mast-cell degranulation has been suggested as a mechanism. Activation of TRPV ion channels may be involved in the degranulation. Here, we investigated whether TRPV1 could serve as a candidate for laser-induced mast cell activation. Activation of TRPV1 by capsaicin resulted in degranulation. To investigate the effect of laser irradiation on TRPV1, we used the Xenopus oocyte as expression and model system. We show that TRPV1 can be functionally expressed in the oocyte by (a) activation by capsaicin (K½ = 1.1 μM), (b) activation by temperatures exceeding 42 °C, (c) activation by reduced pH (from 7.4 to 6.2), and (d) inhibition by ruthenium red. Neither red (637 nm) nor blue (406 nm) light affected membrane currents in oocytes or modulated the capsaicin-induced current. In contrast, green laser light (532 nm) produced power-dependent activation of TRPV1. In conclusion, we could show that green light is effective at the cellular level in activating TRPV1. To what extent green light is of medical relevance needs further investigation.
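The capsaicin activation characterized by K½ = 1.1 µM can be summarized by a dose-response relation. A sketch using a simple Hill equation; only K½ comes from the text, while the Hill coefficient n = 2 is an assumption for illustration:

```python
def trpv1_activation(capsaicin_um, k_half=1.1, n=2.0):
    """Fractional TRPV1 response via a Hill equation.

    k_half = 1.1 uM is taken from the text; the Hill coefficient n
    is an assumed illustrative value, not a fitted one.
    """
    return capsaicin_um ** n / (k_half ** n + capsaicin_um ** n)

# At the half-activation concentration the response is 0.5 by construction,
# independent of the assumed n.
print(trpv1_activation(1.1))
```

Such a curve is what the oocyte current measurements would be fitted against to extract K½.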
Analgesia is a well-documented effect of acupuncture. The nervous system, including the GABAergic system and opioid receptor (OR) activation, plays a critical role in pain sensation. Here we investigated the regulation of the GABA transporter GAT1 by δOR in rats and in Xenopus oocytes. Synaptosomes from the brains of rats chronically exposed to opiates exhibited reduced GABA uptake, indicating that GABA transport might be regulated by opioid receptors. For further investigation we expressed GAT1 of mouse brain together with mouse δOR and μOR in Xenopus oocytes. The function of GAT1 was analyzed in terms of Na⁺-dependent [³H]GABA uptake as well as GAT1-mediated currents. Coexpression of δOR led to a reduced number of fully functional GAT1 transporters, reduced substrate translocation, and reduced GAT1-mediated current. Activation of δOR further reduced the rate of GABA uptake as well as the GAT1-mediated current. Coexpression of μOR, as well as μOR activation, affected neither the number of transporters, nor the rate of GABA uptake, nor the GAT1-mediated current. Inhibition of the GAT1-mediated current by activation of δOR was confirmed in whole-cell patch-clamp experiments on rat brain slices of the periaqueductal gray. We conclude that inhibition of GAT1 function strengthens the inhibitory action of the GABAergic system and hence may contribute to acupuncture-induced analgesia.
The basic physics of nonrelativistic and electromagnetic ion stopping in hot and ionized plasma targets is thoroughly updated. The corresponding projectile-target interactions involve enhanced projectile ionization and coupling with target free electrons, leading to significantly larger energy losses in hot targets when contrasted with their cold homologues. The standard stopping formalism is framed around the most economical extrapolation of high-velocity stopping in cold matter. Further elaborations pay attention to target electron coupling as well as nonlinearities due to the enhanced projectile charge state. Scaling rules are then used to optimize the enhanced stopping of MeV/amu ions in plasmas with electron linear densities n_el ~ 10^18-10^20 cm^-2. Synchronizing dense and strongly ionized plasmas with the time structure of a bunched and energetic multicharged ion beam then allows probing, for the first time, the long-sought enhanced plasma stopping and the projectile charge at target exit. Laser-ablated plasmas (SPQR1) and dense linear plasma columns (SPQR2) emerge as targets of choice in providing accurate online measurements of plasma parameters. The corresponding stopping results are of central significance in establishing the validity of intense ion-beam scenarios for driving thermonuclear pellets. Other applications of note include thorium-induced fission, novel ion sources, and specific material processing with low-energy ion beams. Last but not least, the ion beam-plasma target interaction physics presented here is likely to pave the way to the production and diagnostics of warm dense matter (WDM).
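The enhanced stopping referred to above can be made concrete with the schematic Bethe-type expression for electronic stopping (a standard textbook form in Gaussian units, added here for context rather than taken from the abstract):

```latex
-\frac{dE}{dx} \;\simeq\; \frac{4\pi\, Z_{\mathrm{eff}}^{2}\, e^{4}\, n_{e}}{m_{e} v^{2}}\,\ln\Lambda ,
```

where $v$ is the projectile velocity and $n_e$ the electron density. Both enhancement mechanisms named in the abstract appear here: the effective projectile charge $Z_{\mathrm{eff}}$ grows in a hot target because of enhanced projectile ionization, and the Coulomb logarithm $\ln\Lambda$ is larger for free plasma electrons than for electrons bound in cold matter.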
Subject-specific appendix to the SPoL (Part III): the subject Physics in the teacher-training degree programmes L2 and L5
(2008)
In this thesis, various aspects of the theoretical description of ultracold bosonic atoms in optical lattices are investigated. After giving a brief introduction to the fundamental concepts of BECs, atomic physics, interatomic interactions, and experimental procedures in chapter (1), we derive the Bose-Hubbard model from first principles in chapter (2). In this chapter, we also introduce and discuss a technique to efficiently determine Wannier states, which, in contrast to current techniques, can also be extended to inhomogeneous systems. This technique is later extended to higher-dimensional, non-separable lattices in chapter (5). The many-body physics and phases of the Bose-Hubbard model are briefly presented in chapter (3), in conjunction with Gutzwiller mean-field theory and the recently devised projection operator approach. We then return, in chapter (4), to the derivation of an improved microscopic many-body Hamiltonian, which contains higher-band contributions in the presence of interactions. We then move on to many-particle theory. To demonstrate the conceptual relations required in the following chapter, we derive Bogoliubov theory in chapter (5.3.4) in three different ways and discuss the connections. Furthermore, this derivation goes beyond the usual version discussed in most textbooks and papers, as it accounts for the fact that the quasi-particle Hamiltonian is not diagonalizable in the condensate and that the eigenvectors have to be completed by additional vectors to form a basis. This leads to a qualitatively different quasi-particle Hamiltonian and more intricate transformation relations. In the following two chapters (7, 8), we derive an extended quasi-particle theory, which goes beyond Bogoliubov theory and is not restricted to weak interactions or a large condensate fraction. This quasi-particle theory naturally contains additional modes, such as the amplitude mode in the strongly interacting condensate.
Bragg spectroscopy, a momentum-resolved spectroscopic technique, is introduced and used for the first experimental detection of the amplitude mode at finite quasi-momentum in chapter (9). The closely related lattice modulation spectroscopy is discussed in chapter (10). The results of a time-dependent simulation agree with experimental data, suggesting that the amplitude mode, and not the sound mode, was also probed in these experiments. In chapter (11) the dynamics of strongly interacting bosons far from equilibrium in inhomogeneous potentials are explored. We introduce a procedure that, in conjunction with the collapse and revival of the condensate, can be used to create exotic condensates, focusing in particular on the case of a quadratic trapping potential. Finally, in chapter (12), we turn to the physics of disordered systems and derive and discuss in detail the stochastic mean-field theory for the disordered Bose-Hubbard model.
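For reference, the Bose-Hubbard model derived in chapter (2) has the standard form (a textbook expression, added here for context):

```latex
\hat{H}_{\mathrm{BH}} \;=\; -J \sum_{\langle i,j\rangle} \hat{b}_i^{\dagger}\hat{b}_j
\;+\; \frac{U}{2}\sum_i \hat{n}_i\left(\hat{n}_i - 1\right)
\;-\; \mu \sum_i \hat{n}_i ,
```

with $J$ the tunnelling amplitude between neighbouring lattice sites, $U$ the on-site interaction, and $\mu$ the chemical potential; the superfluid and Mott-insulator phases discussed in chapter (3) arise from the competition between $J$ and $U$.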
The study of systems whose properties are governed by electronic correlations is a cornerstone of modern solid-state physics. Such systems often feature unique and distinct properties like Mott metal-insulator transitions, rich phase diagrams, and high sensitivity to subtle changes in the applied conditions. While the standard approach to electronic structure calculations, density functional theory (DFT), is able to address the complexity of real-world materials, it is known to have serious limitations in the description of correlations; dynamical mean-field theory (DMFT), by contrast, has become an established method for the treatment of correlated fermions, first on the level of minimal models and later in combination with DFT, termed LDA+DMFT.
This thesis presents theoretical calculations on different materials exhibiting correlated physics, aiming to cover a range both in terms of systems (from rather weakly correlated to strongly correlated) and in terms of methods, from DFT calculations to combined LDA+DMFT calculations. We begin with a study of a selection of iron pnictides, a recently discovered family of high-temperature superconductors with varying degrees of correlation strength, and show that their magnetic and optical properties can be assessed to some degree within DFT, despite the correlated nature of these systems. Next, extending our analysis to include correlations in the framework of LDA+DMFT, we discuss the electronic structure of the iron pnictide LiFeAs, which we find to be well described by Fermi liquid theory with regard to many of its properties, yet we see distinct changes in its Fermi surface upon inclusion of correlations. We continue the study of low-energy properties, and specifically Fermi surfaces, with two more iron pnictides, LaFePO and LiFeP, and predict a topology change of their Fermi surfaces due to the effect of correlations, with possible implications for their superconducting properties. In our last study, we close the circle by presenting LDA+DMFT calculations on an organic molecular crystal on the verge of a Mott metal-insulator transition; there, we find the spectral and optical properties to display signatures of strong electronic correlations beyond Fermi liquid theory.
With the increasing energies and intensities of heavy-ion accelerator facilities, the problem of excessive activation of accelerator components caused by beam losses becomes more and more important. Numerical experiments using Monte Carlo transport codes are performed in order to assess the levels of activation. The heavy-ion versions of the codes were released approximately a decade ago; verification is therefore needed to ensure that they give reasonable results. The present work focuses on obtaining experimental data on the activation of targets by heavy-ion beams. Several experiments were performed at GSI Helmholtzzentrum für Schwerionenforschung. The interaction of nitrogen, argon, and uranium beams with aluminum targets, as well as the interaction of nitrogen and argon beams with copper targets, was studied. After irradiation of the targets with different ion beams from the SIS18 synchrotron at GSI, a γ-spectroscopy analysis was performed: the γ-spectra of the residual activity were measured, the radioactive nuclides were identified, and their amounts and depth distributions were determined. The experimental results were compared with the results of Monte Carlo simulations using FLUKA, MARS, and SHIELD. The discrepancies and agreements between experiment and simulations are pointed out, and the origin of the discrepancies is discussed. The results obtained allow for a better verification of the Monte Carlo transport codes and also provide information for their further development. The necessity of activation studies for accelerator applications is discussed. The limits of applicability of the heavy-ion beam-loss criteria were studied using the FLUKA code. FLUKA simulations were performed to determine which materials are most preferable, from a radiation-protection point of view, for use in accelerator components.
In 1871, a meteorological station was set up by the Naturwissenschaftlicher Verein Osnabrück (founded in 1870). It was located at the summer house of JOHANN-VOLLRATH KETTLER, then a senior court judge (Obergerichtsrat), at Ziegelstraße 7, Osnabrück. In 1872, KETTLER reported on its "origin, setup, and first results" in the first annual report of the Naturwissenschaftlicher Verein. That report is reproduced here; it shows us that all measurements were carried out precisely and conscientiously.
In recent years, Hagedorn states have been used to explain the equilibrium and transport properties of a hadron gas close to the QCD critical temperature. These massive resonances are shown to lower η/s to near the AdS/CFT limit close to the phase transition. A comparison of the Hagedorn model to recent lattice results is made, and it is found that the hadrons can reach chemical equilibrium almost immediately, well before the chemical freeze-out temperatures found in thermal fits for a hadron gas without Hagedorn states.
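The AdS/CFT limit referred to above is the Kovtun-Son-Starinets bound on the shear viscosity to entropy density ratio (a standard result, stated here for context):

```latex
\frac{\eta}{s} \;\ge\; \frac{1}{4\pi} \qquad (\hbar = k_{B} = 1),
```

so the statement is that the massive Hagedorn resonances drive η/s of the hadron gas down towards this conjectured lower bound near the phase transition.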
Direct photon emission in heavy-ion collisions is calculated within a relativistic micro+macro hybrid model and compared to the microscopic transport model UrQMD. In the hybrid approach, the high-density part of the collision is calculated by an ideal 3+1-dimensional hydrodynamic calculation, while the early (pre-equilibrium) and late (rescattering) phases are calculated with the transport model. Different scenarios for the transition from the macroscopic description to the transport-model description, and their effects, are studied. The calculations are compared to measurements by the WA98 collaboration, and predictions for the future CBM experiment are made.
We explore the shape and orientation of the freeze-out region of non-central heavy-ion collisions. For this we fit the freeze-out distribution with a tilted ellipsoid. The resulting tilt angle is compared to the same tilt angle extracted via an azimuthally sensitive HBT analysis. This allows the tilt angle to be accessed experimentally, which is not possible directly from the freeze-out distribution. We also show a systematic study of the dependence of the system's decoupling time on dN_ch/dη, using HBT results from the UrQMD transport model. In this study we found that the decoupling time scales with (dN_ch/dη)^(1/3) within each energy, but the scaling is broken across energies.
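The cube-root scaling of the decoupling time within one energy amounts to a one-parameter proportionality, which can be checked by a simple fit in the transformed variable. A sketch on synthetic numbers (the actual multiplicities and times come from the UrQMD HBT analysis and are not reproduced here):

```python
# Synthetic check of tau = c * (dNch/deta)^(1/3): generate points obeying
# the scaling, then recover the constant c by a least-squares fit in the
# transformed variable x = (dNch/deta)^(1/3).  All numbers hypothetical.
multiplicities = [100.0, 200.0, 400.0, 800.0]   # hypothetical dNch/deta
c_true = 1.5                                     # hypothetical constant (fm/c)
taus = [c_true * m ** (1.0 / 3.0) for m in multiplicities]

xs = [m ** (1.0 / 3.0) for m in multiplicities]
c_fit = sum(x * t for x, t in zip(xs, taus)) / sum(x * x for x in xs)
print(c_fit)  # recovers c_true = 1.5 on this noise-free synthetic data
```

The broken scaling across energies would show up as a different fitted constant c at each collision energy.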
A Euclidean strong-coupling expansion of the partition function is applied to lattice Yang-Mills theory at finite temperature, i.e. for lattices with a compactified temporal direction. The expansions have a finite radius of convergence and thus are valid only for β < β_c, where β_c denotes the nearest singularity of the free energy on the real axis. The accessible temperature range is thus the confined regime up to the deconfinement transition. We have calculated the first few orders of these expansions of the free energy density as well as the screening masses for the gauge groups SU(2) and SU(3). The resulting free-energy series can be summed up and corresponds, up to the calculated order, to a glueball gas of the lowest-mass glueballs. Our result can be used to fix the lower integration constant for Monte Carlo calculations of the thermodynamic pressure via the integral method, and shows from first principles that in the confined phase this constant is indeed exponentially small. Similarly, our results also explain the weak temperature dependence of glueball screening masses below T_c, as observed in Monte Carlo simulations. Possibilities and difficulties in extracting β_c from the series are discussed.
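The claim that the integration constant is exponentially small in the confined phase can be made concrete with the pressure of a dilute, non-interacting gas of glueballs of mass m in the Boltzmann approximation (a standard textbook relation, added here for context rather than taken from the abstract):

```latex
p(T) \;=\; \frac{m^{2}T^{2}}{2\pi^{2}}\, K_{2}\!\left(\frac{m}{T}\right)
\;\xrightarrow{\;m \gg T\;}\;
\left(\frac{mT}{2\pi}\right)^{3/2} T\, e^{-m/T} ,
```

per bosonic degree of freedom, where $K_2$ is a modified Bessel function of the second kind. Since the lightest glueball mass far exceeds $T_c$, the Boltzmann factor $e^{-m/T}$ makes the confined-phase pressure, and hence the integration constant, exponentially suppressed.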
We report on the first steps of an ongoing project to add gauge observables and gauge corrections to the well-studied strong-coupling limit of staggered lattice QCD, which has been shown earlier to be amenable to numerical simulations by the worm algorithm in the chiral limit and at finite density. Here we show how to evaluate the expectation value of the Polyakov loop in the framework of the strong-coupling limit at finite temperature, allowing confinement properties to be studied along with those of chiral symmetry breaking. We find the Polyakov loop to rise smoothly, thus signalling deconfinement. The non-analytic nature of the chiral phase transition is reflected in the derivative of the Polyakov loop. We also discuss how to construct an effective theory for non-zero lattice coupling, which is valid to O(β).
Perturbation theory for non-abelian gauge theories at finite temperature is plagued by infrared divergences caused by magnetic soft modes ~ g²T, corresponding to the gluon fields of a 3d Yang-Mills theory. While the divergences can be regulated by a dynamically generated magnetic mass on that scale, the gauge coupling drops out of the effective expansion parameter, requiring summation of all loop orders for the calculation of observables. Some gauge-invariant possibilities for implementing such infrared-safe resummations are reviewed. We use a scheme based on the non-linear sigma model to estimate some of the contributions ~ g⁶ of the soft magnetic modes to the QCD pressure through two loops. The NLO contribution amounts to ~ 10% of the LO, suggestive of a reasonable convergence of the series.
The so-called sign problem of lattice QCD prohibits Monte Carlo simulations at finite baryon
density by means of importance sampling. Over the last few years, methods have been developed
which are able to circumvent this problem as long as the quark chemical potential satisfies μ/T ≲ 1.
After a brief review of these methods, their application to a first principles determination of the
QCD phase diagram for small baryon densities is summarised. The location and curvature of the
pseudo-critical line of the quark-hadron transition are under control and extrapolations to physical
quark masses and the continuum are feasible in the near future. No definite conclusions can as
yet be drawn regarding the existence of a critical end point, which turns out to be extremely quark
mass and cut-off sensitive. Investigations with different methods on coarse lattices show the light-mass chiral phase transition to weaken when a chemical potential is switched on. Should this persist on finer lattices, it would imply that there is no chiral critical point or phase transition for physical
QCD. Any critical structure would then be related to physics other than chiral symmetry breaking.
The chiral critical surface is a surface of second order phase transitions bounding the region of first order chiral phase transitions for small quark masses in the {m_u,d, m_s, μ} parameter space. The potential critical endpoint of the QCD (T, μ) phase diagram is widely expected to be part of this surface. Since for μ = 0 with physical quark masses QCD is known to exhibit an analytic crossover, this expectation requires the region of chiral transitions to expand with μ for a chiral critical endpoint to exist. Instead, on coarse Nt = 4 lattices, we find the area of chiral transitions to shrink with μ, which excludes a chiral critical point for QCD at moderate chemical potentials μB < 500 MeV. First results on finer Nt = 6 lattices indicate a curvature of the critical surface consistent with zero and unchanged conclusions. We also comment on the interplay of phase diagrams between the Nf = 2 and Nf = 2+1 theories and its consequences for physical QCD.
We report progress in our exploration of the finite-temperature phase structure of two-flavour lattice
QCD with twisted-mass Wilson fermions and a tree-level Symanzik-improved gauge action
for a temporal lattice size Nt = 8. Extending our investigations to a wider region of parameter
space we gain a global view of the rich phase structure. We identify the finite temperature transition/crossover for a non-vanishing twisted-mass parameter in the neighbourhood of the zero-temperature critical line at sufficiently high β. Our findings are consistent with Creutz’s conjecture of a conical shape of the finite temperature transition surface. Comparing with NLO lattice χPT we achieve an improved understanding of this shape.
Lattice simulations employing reweighting and Taylor expansion techniques have predicted a (μ, T) phase diagram according to general expectations, with an analytic quark-hadron crossover at μ = 0 turning into a first order transition at some critical chemical potential μE. By contrast, recent simulations using imaginary μ followed by analytic continuation obtained a critical structure in the {m_u,d, m_s, T, μ} parameter space favouring the absence of a critical point and first order line. I review the evidence for the latter scenario, arguing that the various raw data are not inconsistent with each other. Rather, the discrepancy appears when attempting to extract continuum results from the coarse (Nt = 4) lattices simulated so far, and can be explained by cut-off effects. New (as yet unpublished) data are presented, which for Nf = 3 and on Nt = 4 confirm the scenario without a critical point. Moreover, simulations on finer Nt = 6 lattices show that even if there is a critical point, continuum extrapolation moves it to significantly larger values of μE than anticipated on coarse lattices.
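Schematically, the Taylor expansion route referred to here expands the pseudo-critical line about μ = 0; the coefficient names κ_n below are illustrative notation, not results from this text:

```latex
% Pseudo-critical line expanded about \mu = 0; only even powers appear
% because the partition function is even in \mu:
\frac{T_c(\mu)}{T_c(0)} = 1 - \kappa_2 \left(\frac{\mu}{T_c(0)}\right)^{2}
  - \kappa_4 \left(\frac{\mu}{T_c(0)}\right)^{4} - \dots
% The \kappa_n can be computed at \mu = 0, or at imaginary \mu where
% the sign problem is absent, and then continued analytically.
```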
We discuss the use of Wilson fermions with twisted mass for simulations of QCD thermodynamics.
As a prerequisite for a future analysis of the finite-temperature transition making use
of automatic O(a) improvement, we investigate the phase structure in the space spanned by the
hopping parameter κ, the coupling β, and the twisted mass parameter μ. We present results for Nf = 2 degenerate quarks on a 16³×8 lattice, for which we investigate the possibility of an Aoki phase existing at strong coupling and vanishing μ, as well as of a thermal phase transition at moderate gauge couplings and non-vanishing μ.
The QCD equation of state is not often discussed in cosmology. However, the relic density of
weakly interacting massive particles (WIMPs) depends on the entropy and the expansion rate of
the Universe when they freeze out, at a temperature in the range 400 MeV – 40 GeV, where QCD
corrections are still important. We use recent analytic and lattice calculations of the QCD pressure
to produce a new equation of state suitable for use in relic density calculations. As an example,
we show that relic densities calculated by the dark matter package DarkSUSY receive corrections
of several per cent, within the observational accuracy of the Planck CMB mission, due for launch
in 2007.
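As a rough illustration of why the equation of state enters here, the following back-of-envelope sketch shows the leading-order sensitivity of a WIMP relic abundance to the effective number of relativistic degrees of freedom g_* at freeze-out. This is not the DarkSUSY calculation; all numbers are hypothetical.

```python
import math

def relic_density_ratio(g_star_old, g_star_new):
    """Leading-order scaling of the WIMP relic abundance with the
    effective relativistic degrees of freedom g_* at freeze-out:
    Omega h^2 is proportional to sqrt(g_*)/g_{*S}; assuming
    g_{*S} ~ g_* this reduces to 1/sqrt(g_*)."""
    return math.sqrt(g_star_old / g_star_new)

# Hypothetical numbers, not taken from the paper: a 5% upward
# correction to g_* near freeze-out lowers the predicted relic
# density by roughly 2.4%, i.e. the few-per-cent level quoted.
shift = relic_density_ratio(75.0, 78.75) - 1.0
print(f"relative change in Omega h^2: {shift:+.2%}")
```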
I review recent developments in determining the QCD phase diagram by means of lattice simulations.
Since the invention of methods to side-step the sign problem a few years ago, a number
of additional variants have been proposed, and progress has been made towards understanding
some of the systematics involved. All available techniques agree on the transition temperature
as a function of density in the regime μq/T ≲ 1. There are by now four calculations with signals
for a critical point, two of them at similar parameter values and with consistent results. However,
it also emerges that the location of the critical point is exceedingly quark mass sensitive. At the
same time sizeable finite volume, cut-off and step size effects have been uncovered, demanding
additional investigations with exact algorithms on larger and finer lattices before quantitative conclusions
can be drawn. Depending on the sign of these corrections, there is ample room for the
eventual phase diagram to look as expected or also quite different, with no critical point at all.
We review our knowledge of the phase diagram of QCD as a function of temperature, chemical potential and quark masses. The presence of tricritical lines at imaginary chemical potential μ = i(π/3)T, with known scaling behaviour in their vicinity, puts constraints on this phase diagram, especially in the case of two light flavors. We show first results in our project to determine the finite-temperature behaviour in the Nf = 2 chiral limit.
We discuss deviations from the exponential decay law which occur when going beyond the Breit-Wigner distribution for an unstable state. In particular, we concentrate on an oscillating behavior, reminiscent of Rabi oscillations, in the short-time region. We propose that these oscillations can explain the so-called GSI anomaly, in which superimposed oscillations on top of the exponential law were measured for hydrogen-like nuclides decaying via electron capture. Moreover, we discuss the possibility that the deviations from the Breit-Wigner distribution in the case of the GSI anomaly are (predominantly) caused by the interaction of the unstable state with the measurement apparatus. The consequences of this scenario, such as the non-observation of oscillations in an analogous experiment performed at Berkeley, are investigated.
A lot of effort in lattice simulations over the last years has been devoted to studies of the QCD deconfinement transition. Most state-of-the-art simulations use rooted staggered fermions, while Wilson fermions are affected by large systematic uncertainties, such as coarse lattices or heavy sea quarks. Here we report on an ongoing study of the transition, using two degenerate flavours of nonperturbatively O(a) improved Wilson fermions. We start with Nt = 12 and 16 lattices and pion masses of 600 to 450 MeV, aiming at chiral and continuum limits with light quarks.
Lattice QCD using OpenCL
(2011)
We perform a detailed study of the adjoint static potential in the pseudoparticle approach, which is a model for SU(2) Yang-Mills theory. We find agreement with the Casimir scaling hypothesis and there is clear evidence for string breaking. At the same time the potential in the fundamental representation is linear for large separations. Our results are in qualitative agreement with results from lattice computations.
We present the status of runs performed in the twisted mass formalism with Nf = 2+1+1 flavours of dynamical fermions: a degenerate light doublet and a mass split heavy doublet. The procedure for tuning to maximal twist will be described as well as the current status of the runs using both thin and stout links. Preliminary results for a few observables obtained on ensembles at maximal twist will be given. Finally, a reweighting procedure to tune to maximal twist will be described.
The pseudoparticle approach is a numerical method to compute path integrals without discretizing spacetime. The basic idea is to consider only those field configurations, which can be represented as a linear superposition of a small number of localized building blocks (pseudoparticles), and to replace the functional integration by an integration over the pseudoparticle degrees of freedom. In previous papers we have successfully applied the pseudoparticle approach to SU(2) Yang-Mills theory. In this work we discuss the inclusion of fermionic fields in the pseudoparticle approach. To test our method, we compute the phase diagram of the 1+1-dimensional Gross-Neveu model in the large-N limit as well as the chiral condensate in the crystal phase.
We present a numerical technique for calculating path integrals in non-compact U(1) and SU(2) gauge theories. The gauge fields are represented by a superposition of pseudoparticles of various types with their amplitudes and color orientations as degrees of freedom. Applied to Maxwell theory this technique results in a potential which is in excellent agreement with the Coulomb potential. For SU(2) Yang-Mills theory the same technique yields clear evidence of confinement. Varying the coupling constant exhibits the same scaling behavior for the string tension, the topological susceptibility and the critical temperature while their dimensionless ratios are similar to those obtained in lattice calculations.
We compute the static-light baryon spectrum with Nf = 2 flavors of sea quarks using Wilson twisted mass lattice QCD. As light valence quarks we consider quarks which have the same mass as the sea quarks, with corresponding pion masses in the range 340 MeV ≲ mPS ≲ 525 MeV, as well as partially quenched quarks which have the mass of the physical s quark. We extract masses of states with isospin I = 0, 1/2, 1, with strangeness S = 0, −1, −2, with angular momentum of the light degrees of freedom j = 0, 1 and with parity P = +, −. We present a preliminary extrapolation in the light u/d and an interpolation in the heavy b quark mass to the physical point and compare with available experimental results.
We present unambiguous evidence from lattice simulations of Nf = 3 QCD for two tricritical points in the (T, μ) phase diagram at fixed imaginary μ/T = iπ/3 mod 2π/3, one in the light and one in the heavy quark regime. Together with similar results in the literature for Nf = 2 this implies the existence of a chiral and of a deconfinement tricritical line at those values of imaginary chemical potentials. These tricritical lines represent the boundaries of the analytically continued chiral and deconfinement critical surfaces, respectively, which delimit the parameter space with first order phase transitions. It is demonstrated that the shape of the deconfinement critical surface is dictated by tricritical scaling and implies the weakening of the deconfinement transition with real chemical potential. A qualitatively similar effect holds for the chiral critical surface.
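The tricritical scaling invoked above has, in mean-field form, a schematic consequence of the following type; c_1 and c_2 stand for non-universal constants and are not values from this text:

```latex
% Mean-field tricritical scaling: the critical quark mass departs from
% the tricritical point with exponent 5/2, so along the critical line
m_c(\mu)^{2/5} \;\approx\; c_1 + c_2 \left(\frac{\mu}{T}\right)^{2},
% continuing analytically from imaginary chemical potential, where
% (\mu/T)^2 = -(\pi/3)^2, to real \mu.
```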
We perform a two-flavor dynamical lattice computation of the Isgur-Wise functions t1/2 and t3/2
at zero recoil in the static limit. We find t1/2(1) = 0.297(26) and t3/2(1) = 0.528(23) fulfilling
Uraltsev’s sum rule by around 80%. We also comment on a persistent conflict between theory and
experiment regarding semileptonic decays of B mesons into orbitally excited P wave D mesons,
the so-called “1/2 versus 3/2 puzzle”, and we discuss the relevance of lattice results in this
context.
We discuss the implementation and results of a recently developed microscopic method for calculating ion-ion interaction potentials and fusion cross-sections. The method uses the TDHF evolution to obtain the instantaneous many-body collective state using a density constraint. The ion-ion potential as well as the coordinate dependent mass are calculated from these states. The method fully accounts for the dynamical processes present in the TDHF time-evolution and provides a parameter-free way of calculating fusion cross-sections.
We study the implications on compact star properties of a soft nuclear equation of state determined from kaon production at subthreshold energies in heavy-ion collisions. On one hand, we apply these results to study radii and moments of inertia of light neutron stars. Heavy-ion data provide constraints on nuclear matter at densities relevant for those stars and, in particular, on the density dependence of the symmetry energy of nuclear matter. On the other hand, we derive a limit for the highest allowed neutron star mass of three solar masses. For that purpose, we use the information on the nucleon potential obtained from the analysis of the heavy-ion data combined with causality on the nuclear equation of state.
We present and compare new types of algorithms for lattice QCD with staggered fermions in the limit of infinite gauge coupling. These algorithms are formulated on a discrete spatial lattice but with continuous Euclidean time. They make use of the exact Hamiltonian, with the inverse temperature β as the only input parameter. This formulation turns out to be analogous to that of a quantum spin system. The sign problem is completely absent, at zero and non-zero baryon density. We compare the performance of a continuous-time worm algorithm and of a Stochastic Series Expansion algorithm (SSE), which operates on equivalence classes of time-ordered interactions. Finally, we apply the SSE algorithm to a first exploratory study of two-flavor strong coupling lattice QCD, which is manageable in the Hamiltonian formulation because the sign problem can be controlled.
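The worm idea referenced above can be illustrated on a much simpler system. The following is a generic sketch of a Prokof'ev–Svistunov-type worm update for the high-temperature bond representation of the 2D Ising model, not the strong coupling QCD algorithm itself; all parameters are illustrative.

```python
import math
import random

def worm_ising_2d(L=8, beta=0.4, sweeps=5000, seed=1):
    """Worm update for the bond representation of the 2D Ising model.
    Closed bond configurations carry weight tanh(beta)^(#bonds); the
    worm head/tail break the closedness constraint locally and sample
    the two-point function while diffusing. Generic sketch only."""
    rng = random.Random(seed)
    w = math.tanh(beta)
    bonds = set()                       # occupied bonds: (x, y, direction)

    def key(x, y, dx, dy):
        # canonical key of the bond leaving (x, y) in direction (dx, dy)
        if dx == 1:
            return (x, y, 0)
        if dx == -1:
            return ((x - 1) % L, y, 0)
        if dy == 1:
            return (x, y, 1)
        return (x, (y - 1) % L, 1)

    head = tail = (rng.randrange(L), rng.randrange(L))
    closed = 0
    steps = 0
    while steps < sweeps or head != tail:   # always stop on a closed config
        steps += 1
        x, y = head
        dx, dy = rng.choice(((1, 0), (-1, 0), (0, 1), (0, -1)))
        k = key(x, y, dx, dy)
        ratio = (1.0 / w) if k in bonds else w   # weight change of toggling
        if rng.random() < min(1.0, ratio):
            if k in bonds:
                bonds.remove(k)
            else:
                bonds.add(k)
            head = ((x + dx) % L, (y + dy) % L)
        if head == tail:
            closed += 1                 # closed configurations contribute to Z
    return bonds, head, tail, closed
```

Each accepted move flips the degree parity of the head's old and new site, so all sites except head and tail always have even degree; when the worm closes, the configuration is a valid high-temperature graph.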
It is widely believed that chiral symmetry is spontaneously broken at zero temperature in the strong coupling limit of staggered fermions, for any number of colors and flavors. Using Monte Carlo simulations, we show that this conventional wisdom, based on a mean-field analysis, is wrong. For sufficiently many fundamental flavors, chiral symmetry is restored via a bulk, first-order transition. This chirally symmetric phase appears to be analytically connected with the expected conformal window of many-flavor continuum QCD. We perform simulations in the chirally symmetric phase at zero quark mass for various system sizes L, and measure the torelon mass and the Dirac spectrum. We find that all observables scale with L, which is hence the only infrared length scale. Thus, the strong-coupling chirally restored phase appears as a convenient laboratory to study IR-conformality. Finally, we present a conjecture for the phase diagram of lattice QCD as a function of the bare coupling and the number of quark flavors.
We analyze the universal critical behavior at the chiral critical point in QCD with three degenerate quark masses. We confirm that this critical point lies in the universality class of the three dimensional Ising model. The symmetry of the Ising model, which is Z(2), is not directly realized in the QCD Hamiltonian. After making an ansatz for the magnetization- and energy-like operators as linear admixtures of the chiral condensate and the gluonic action, we determine several non-universal mixing and normalization constants. These parameters determine an unambiguous mapping of the critical behavior in QCD to that of the 3d Ising model. We verify its validity by showing that the thus obtained order parameter scales in accordance with the magnetic equation of state of the 3d Ising model.
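Schematically, the linear-admixture ansatz described here takes the following form, where r and s denote the non-universal mixing constants:

```latex
% M scales like the Ising magnetization, E like the Ising energy;
% (t, h) of the Ising model become linear combinations of the QCD
% couplings after fixing the non-universal constants.
M \;=\; \bar{\psi}\psi + r\, S_g \,, \qquad
E \;=\; S_g + s\, \bar{\psi}\psi
```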
We explore the phase diagram of two flavour QCD at vanishing chemical potential using dynamical O(a)-improved Wilson quarks. In the approach to the chiral limit we use lattices with a temporal extent of Nt = 16 and spatial extents L = 32, 48 and 64 to enable the extrapolation to the thermodynamic limit with small discretisation effects. In addition to an update on the scans at constant κ, reported earlier, we present first results from scans along lines of constant physics at a pion mass of 290 MeV. We probe the transition using the Polyakov loop and the chiral condensate, as well as spectroscopic observables such as screening masses.
Pseudo-Critical Temperature and Thermal Equation of State from Nf = 2 Twisted Mass Lattice QCD
(2012)
We report on the current status of our ongoing study of the chiral limit of two-flavor QCD at finite temperature with twisted mass quarks. We estimate the pseudo-critical temperature Tc for three values of the pion mass in the range mPS ≈ 300–500 MeV and discuss different chiral scenarios. Furthermore, we present first preliminary results for the trace anomaly, pressure and energy density. We have studied several discretizations of Euclidean time up to Nt = 12 in order to assess the continuum limit of the trace anomaly. From its interpolation we evaluate the pressure and energy density employing the integral method. Here, we have focussed on two pion masses, mPS ≈ 400 and 700 MeV.
We present a lattice QCD calculation of the heavy-light decay constants fB and fBs performed with Nf = 2 maximally twisted Wilson fermions, at four values of the lattice spacing. The decay constants have been also computed in the static limit and the results are used to interpolate the observables between the charm and the infinite-mass sectors, thus obtaining the value of the decay constants at the physical b quark mass. Our preliminary results are fB = 191(14) MeV, fBs = 243(14) MeV, fBs/fB = 1.27(5). They are in good agreement with those obtained with a novel approach, recently proposed by our Collaboration (ETMC), based on the use of suitable ratios having an exactly known static limit.
We present first results from runs performed with Nf = 2+1+1 flavours of dynamical twisted mass fermions at maximal twist: a degenerate light doublet and a mass split heavy doublet. An overview of the input parameters and tuning status of our ensembles is given, together with a comparison with results obtained with Nf = 2 flavours. The problem of extracting the mass of the K- and D-mesons is discussed, and the tuning of the strange and charm quark masses examined. Finally we compare two methods of extracting the lattice spacings to check the consistency of our data and we present some first results of χPT fits in the light meson sector.
We analyze general convergence properties of the Taylor expansion of observables to finite chemical potential in the framework of an effective 2+1 flavor Polyakov-quark-meson model. To compute the required higher order coefficients a novel technique based on algorithmic differentiation has been developed. Results for thermodynamic observables as well as the phase structure obtained through the series expansion up to 24th order are compared to the full model solution at finite chemical potential. The available higher order coefficients also allow for resummations, e.g. Padé series, which improve the convergence behavior. In view of our results we discuss the prospects for locating the QCD phase boundary and a possible critical endpoint with the Taylor expansion method.
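A minimal sketch of the two ingredients named above — Taylor-mode algorithmic differentiation (propagating truncated power series through arithmetic) and Padé resummation — applied to a hypothetical rational "pressure", not the Polyakov-quark-meson model itself:

```python
class Series:
    """Truncated power series sum_k c[k]*x^k, closed under +, *, /."""
    def __init__(self, coeffs, order):
        self.n = order
        self.c = (list(coeffs) + [0.0] * order)[:order]

    def __add__(self, o):
        return Series([a + b for a, b in zip(self.c, o.c)], self.n)

    def __mul__(self, o):
        c = [0.0] * self.n
        for i, a in enumerate(self.c):
            for j in range(self.n - i):
                c[i + j] += a * o.c[j]
        return Series(c, self.n)

    def __truediv__(self, o):
        q = [0.0] * self.n          # standard series-division recurrence
        for k in range(self.n):
            s = self.c[k] - sum(o.c[j] * q[k - j] for j in range(1, k + 1))
            q[k] = s / o.c[0]
        return Series(q, self.n)

def gauss_solve(A, y):
    """Small dense linear solve with partial pivoting."""
    n = len(y)
    M = [row[:] + [y[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for k in range(col, n + 1):
                M[r][k] -= f * M[col][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def pade(c, m, n):
    """[m/n] Pade approximant from Taylor coefficients c (b[0] = 1)."""
    A = [[(c[m + k - j] if 0 <= m + k - j < len(c) else 0.0)
          for j in range(1, n + 1)] for k in range(1, n + 1)]
    b = [1.0] + gauss_solve(A, [-c[m + k] for k in range(1, n + 1)])
    a = [sum(b[j] * c[i - j] for j in range(min(i, n) + 1)) for i in range(m + 1)]
    return a, b

def horner(cs, x):
    v = 0.0
    for coef in reversed(cs):
        v = v * x + coef
    return v

N = 8
mu = Series([0.0, 1.0], N)              # the expansion variable
one = Series([1.0], N)
p = (mu * mu) / (one + mu * mu)         # toy "pressure" p(mu) = mu^2/(1+mu^2)
# p.c now holds mu^2 - mu^4 + mu^6 - ...: the Taylor series only
# converges for |mu| < 1, but the [2/2] Pade resums it exactly.
a, b = pade(p.c, 2, 2)
val = horner(a, 2.0) / horner(b, 2.0)   # exact value p(2) = 0.8
```

The point mirrors the abstract: the raw Taylor series is useless beyond its radius of convergence, while the Padé resummation built from the same coefficients remains accurate.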
We present results of lattice QCD simulations with mass-degenerate up and down and mass-split strange and charm (Nf = 2+1+1) dynamical quarks using Wilson twisted mass fermions at maximal twist. The tuning of the strange and charm quark masses is performed at three values of the lattice spacing a ~ 0:06 fm, a ~ 0:08 fm and a ~ 0:09 fm with lattice sizes ranging from L ~ 1:9 fm to L ~ 3:9 fm. We perform a preliminary study of SU(2) chiral perturbation theory by combining our lattice data from these three values of the lattice spacing.
It is a long discussed issue whether light scalar mesons have sizeable four-quark components. We present an exploratory study of this question using Nf = 2+1+1 twisted mass lattice QCD. A mixed action approach ignoring disconnected contributions is used to calculate correlator matrices consisting of mesonic molecule, diquark-antidiquark and two-meson interpolating operators with quantum numbers of the scalar mesons a0(980) (1(0++)) and κ (1/2(0+)). The correlation matrices are analyzed by solving the generalized eigenvalue problem. The theoretically expected free two-particle scattering states are identified, while no additional low lying states are observed. We do not observe indications for bound four-quark states in the channels investigated.
The isospin, spin and parity dependent potential of a pair of static-light mesons is computed using Wilson twisted mass lattice QCD with two flavors of degenerate dynamical quarks. From the results a simple rule can be deduced stating, which isospin, spin and parity combinations correspond to attractive and which to repulsive forces.
A 5-gap timing RPC equipped with patterned electrodes coupled to both charge-sensitive and timing circuits yields a time accuracy of 77 ps along with a position accuracy of 38 μm. These results were obtained by calculating the straight-line fit residuals to the positions provided by a 3-layer telescope made out of identical detectors, detecting almost perpendicular cosmic-ray muons. The device may be useful for particle identification by time-of-flight, where simultaneous measurements of trajectory and time are necessary.
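The residual method described above can be sketched as follows; the geometry and hit values are purely illustrative, not the actual telescope data:

```python
def line_fit(zs, xs):
    """Least-squares straight line x(z) = a + b*z through telescope hits."""
    n = len(zs)
    sz, sx = sum(zs), sum(xs)
    szz = sum(z * z for z in zs)
    szx = sum(z * x for z, x in zip(zs, xs))
    b = (n * szx - sz * sx) / (n * szz - sz * sz)
    a = (sx - b * sz) / n
    return a, b

def residual(zs, xs, z_dut, x_dut):
    """Residual of the device-under-test hit w.r.t. the telescope track;
    the width of this distribution estimates the position accuracy."""
    a, b = line_fit(zs, xs)
    return x_dut - (a + b * z_dut)

# Hypothetical geometry (mm): three reference RPC layers and one DUT.
zs = [0.0, 100.0, 200.0]
xs = [1.000, 1.110, 1.220]            # perfectly collinear hits
r = residual(zs, xs, 150.0, 1.165)    # ~0 for a collinear DUT hit
```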
We investigate the implications of the r-mode instability on the composition of a compact star rotating at a sub-millisecond period. In particular, the only viable astrophysical scenario for such an object, which might be present inside the Low Mass X-ray Binary associated with the X-ray transient XTE J1739-285, is that it has a strangeness content. Since previous analyses indicate that hyperonic stars or stars containing a kaon condensate are unlikely because of the mass-shedding constraint, the only remaining possibility is that such an object is either a strange quark star or a hybrid quark-hadron star.
The QCD phase diagram as a function of temperature, T, and chemical potential for baryon
number, μB, is still unknown today, due to the sign problem, which prohibits direct Monte Carlo
simulations for non-vanishing baryon density. Investigations in models sharing chiral symmetry
with QCD predict a phase diagram, in which the transition corresponds to a smooth crossover at
zero density, but which is strengthened by chemical potential to turn into a first order transition
beyond some second order critical point. This contribution reviews the lattice evidence in favour
and against the existence of a critical point.
The possible role of a first order QCD phase transition at nonvanishing quark chemical potential and temperature for cold neutron stars and for supernovae is delineated. For cold neutron stars, we use the NJL model with nonvanishing color superconducting pairing gaps, which describes the phase transition to the 2SC and the CFL quark matter phases at high baryon densities. We demonstrate that these two phase transitions can both be present in the core of neutron stars and that they lead to the appearance of a third family of solution for compact stars. In particular, a core of CFL quark matter can be present in stable compact star configurations when slightly adjusting the vacuum pressure to the onset of the chiral phase transition from the hadronic model to the NJL model. We show that a strong first order phase transition can have strong impact on the dynamics of core collapse supernovae. If the QCD phase transition sets in shortly after the first bounce, a second outgoing shock wave can be generated which leads to an explosion. The presence of the QCD phase transition can be read off from the neutrino and antineutrino signal of the supernova.
QCD at finite temperature and density remains intractable by Monte Carlo simulations for quark chemical potentials μ ≳ T. It has been a long standing problem to derive effective theories from
QCD which describe the phase structure of the former with controlled errors. We propose a
solution to this problem by a combination of analytical and numerical methods. Starting from
lattice QCD in Wilson’s formulation, we derive an effective action in terms of Polyakov
loops by means of combined strong coupling and hopping expansions. The theory correctly
reflects the centre-symmetry in the pure gauge limit and its breaking through quarks. It is valid
for heavy quarks and lattices up to Nt ∼ 6. Its sign problem can be solved and we are able to
calculate the deconfinement transition of QCD with heavy quarks for all chemical potentials.
We discuss recent applications of the partonic pQCD based cascade model BAMPS with a focus on heavy-ion phenomenology in the hard and soft momentum ranges. The nuclear modification factor as well as elliptic flow are calculated in BAMPS for RHIC and LHC energies. These observables are also discussed within the same framework for charm and bottom quarks. Contributing to the recent jet-quenching investigations we present first preliminary results on the application of jet reconstruction algorithms in BAMPS. Finally, collective effects induced by jets are investigated: we demonstrate the development of Mach cones in ideal matter as well as in the highly viscous regime.
The modern phase diagram of strongly interacting matter reveals a rich structure at high densities
due to phase transitions related to the chiral symmetry of quantum chromodynamics (QCD) and
the phenomenon of color superconductivity. These exotic phases have a significant impact on
high-density astrophysics, such as the properties of neutron stars, and the evolution of astrophysical systems as proto-neutron stars, core-collapse supernovae and neutron star mergers. Most recent pulsar mass measurements and constraints on neutron star radii are critically discussed.
Astrophysical signals for exotic matter and phase transitions in high-density matter proposed recently in the literature are outlined. A strong first order phase transition leads to the emergence of a third family of compact stars besides white dwarfs and neutron stars. The different microphysics of quark matter results in an enhanced r-mode stability window for rotating compact stars compared to normal neutron stars. Future telescope and satellite data will be used to extract signals from phase transitions in dense matter in the heavens and will reveal properties of the phases of dense QCD. Spectral line profiles out of x-ray bursts will determine the mass-radius ratio of compact stars. Gravitational wave patterns from collapsing neutron stars or neutron star mergers will even be able to constrain the stiffness of the quark matter equation of state. Future astrophysical data can therefore provide a crucial cross-check to the exploration of the QCD phase diagram with the heavy-ion program of the CBM detector at the FAIR facility.
We extend the recently developed strong coupling, dimensionally reduced Polyakov-loop effective theory from finite-temperature pure Yang-Mills to include heavy fermions and nonzero chemical
potential by means of a hopping parameter expansion. Numerical simulation is employed to investigate the weakening of the deconfinement transition as a function of the quark mass. The
tractability of the sign problem in this model is exploited to locate the critical surface in the (M/T, μ/T, T) space over the whole range of chemical potentials from zero up to infinity.
We present experimental results and theoretical simulations of the adsorption behavior of the metal–organic precursor Co2(CO)8 on SiO2 surfaces after application of two different pretreatment steps, namely by air plasma cleaning or a focused electron beam pre-irradiation. We observe a spontaneous dissociation of the precursor molecules as well as autodeposition of cobalt on the pretreated SiO2 surfaces. We also find that the differences in metal content and relative stability of these deposits depend on the pretreatment conditions of the substrate. Transport measurements of these deposits are also presented. We are led to assume that the degree of passivation of the SiO2 surface by hydroxyl groups is an important controlling factor in the dissociation process. Our calculations of various slab settings, using dispersion-corrected density functional theory, support this assumption. We observe physisorption of the precursor molecule on a fully hydroxylated SiO2 surface (untreated surface) and chemisorption on a partially hydroxylated SiO2 surface (pretreated surface) with a spontaneous dissociation of the precursor molecule. In view of these calculations, we discuss the origin of this dissociation and the subsequent autocatalysis.
The biological effects of energetic heavy ions are attracting increasing interest for their applications in cancer therapy and protection against space radiation. The cascade of events leading to cell death or late effects starts from stochastic energy deposition on the nanometer scale and the corresponding lesions in biological molecules, primarily DNA. We have developed experimental techniques to visualize DNA nanolesions induced by heavy ions. Nanolesions appear in cells as “streaks” which can be visualized by using different DNA repair markers. We have studied the kinetics of repair of these “streaks” also with respect to the chromatin conformation. Initial steps in the modeling of the energy deposition patterns at the micrometer and nanometer scale were made with MCHIT and TRAX models, respectively.
We present measurements of exclusive π+,0 and η production in pp reactions at 1.25 GeV and 2.2 GeV beam kinetic energy in hadron and dielectron channels. In the case of π+ and π0, high-statistics invariant-mass and angular distributions are obtained within the HADES acceptance as well as acceptance-corrected distributions, which are compared to a resonance model. The sensitivity of the data to the yield and production angular distribution of Δ(1232) and higher-lying baryon resonances is shown, and an improved parameterization is proposed. The extracted cross-sections are of special interest in the case of pp → ppη, since controversial data exist at 2.0 GeV; we find σ = 0.142±0.022 mb. Using the dielectron channels, the π0 and η Dalitz decay signals are reconstructed with yields fully consistent with the hadronic channels. The electron invariant masses and acceptance-corrected helicity angle distributions are found in good agreement with model predictions.
Second-order dissipative hydrodynamic equations for each component of a multi-component system are derived using the entropy principle. Comparison of the solutions with kinetic transport results demonstrates the validity of the obtained equations. We demonstrate how the shear viscosity of the total system can be calculated in terms of the involved cross-sections and partial densities. The presence of inter-species interactions leads to a characteristic time dependence of the shear viscosity of the mixture, which also means that the shear viscosity of a mixture cannot be calculated using the Green-Kubo formalism the way it has been done recently. This finding is of interest for the understanding of the shear viscosity of a quark-gluon plasma extracted from comparisons of hydrodynamic simulations with experimental results from RHIC and LHC.
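For reference, the Green-Kubo relation alluded to above has the standard textbook form, with V the volume, T the temperature and π^{xy} the shear component of the stress tensor:

```latex
% Green-Kubo formula for the shear viscosity of an equilibrium system:
\eta = \frac{V}{T} \int_{0}^{\infty} \mathrm{d}t \,
       \left\langle \pi^{xy}(t)\, \pi^{xy}(0) \right\rangle_{\mathrm{eq}}
% The abstract's point is that for a mixture the time dependence of
% the correlator is modified by inter-species interactions.
```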
We study the light scalar mesons a_0(980) and kappa using N_f = 2+1+1 flavor lattice QCD. In order to probe the internal structure of these scalar mesons, and in particular to identify whether a sizeable tetraquark component is present, we use a large set of operators, including diquark-antidiquark, mesonic molecule and two-meson operators. The inclusion of disconnected diagrams, which are technically rather challenging but would allow us to extend our work to e.g. the f_0(980) meson, is introduced and discussed.
Electron beam-induced deposition with tungsten hexacarbonyl W(CO)6 as a precursor leads to granular deposits with varying compositions of tungsten, carbon and oxygen. Depending on the deposition conditions, the deposits are insulating or metallic. We employ an evolutionary algorithm to predict the crystal structures starting from a series of chemical compositions that were determined experimentally. We show that this method leads to better structures than structural relaxation based on estimated initial structures. We approximate the expected amorphous structures by reasonably large unit cells that can accommodate local structural environments that resemble the true amorphous structure. Our predicted structures show an insulator-to-metal transition close to the experimental composition at which this transition is actually observed, and they also allow comparison with experimental electron diffraction patterns.
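As a rough illustration of an evolutionary structure search (a hedged toy sketch: a simple pair potential stands in for the first-principles energy evaluations actually used, and all parameters here are invented):

```python
import numpy as np

rng = np.random.default_rng(1)

def toy_energy(pos, box=1.0):
    """Toy pairwise energy in a periodic cubic cell (Lennard-Jones-like);
    stands in for the ab-initio relaxations of a real structure search."""
    n = len(pos)
    e = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            d = pos[i] - pos[j]
            d -= box * np.round(d / box)      # minimum-image convention
            r = max(np.linalg.norm(d), 0.3)   # soft core to avoid blow-up
            e += (0.4 / r) ** 12 - (0.4 / r) ** 6
    return e

def evolve(n_atoms=4, pop=12, gens=30):
    """Minimal (mu+lambda)-style evolutionary search over atomic positions:
    keep the lowest-energy half, mutate it, and wrap back into the cell."""
    population = [rng.random((n_atoms, 3)) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=toy_energy)
        survivors = population[: pop // 2]
        children = [p + rng.normal(scale=0.05, size=p.shape) for p in survivors]
        population = survivors + [c % 1.0 for c in children]
    return min(population, key=toy_energy)

best = evolve()
print(toy_energy(best))
```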
A careful analysis of the magneto-transport properties of epitaxial nanostructured Nb thin films in the normal and the mixed state is performed. The nanopatterns were prepared by focused ion beam (FIB) milling. They provide a washboard-like pinning potential landscape for vortices in the mixed state and simultaneously cause a resistivity anisotropy in the normal state. Two matching magnetic fields for the vortex lattice with the underlying nanostructures have been observed. By applying these fields, the most likely pinning sites along which the flux lines move through the samples have been selected. In this way, either the background isotropic pinning of the pristine film or the enhanced isotropic pinning originating from the nanoprocessing has been probed. Via an Arrhenius analysis of the resistivity data, the pinning activation energies for three vortex lattice parameters have been quantified. The changes in the electrical transport and the pinning properties have been correlated with the results of the microstructural and topographical characterization of the FIB-patterned samples. Accordingly, along with the surface processing, FIB milling has been found to alter the material composition and the degree of disorder in as-grown films. The obtained results provide further insight into the pinning mechanisms at work in FIB-nanopatterned superconductors, e.g. for fluxonic applications.
In this thesis, methods for the identification of brain electrical activity with Cellular Nonlinear Networks (CNN), in particular reaction-diffusion networks, were developed and investigated. Using the introduced methods, long-term recordings of brain electrical activity in epilepsy were analyzed, and an automated procedure was used to determine to what extent possible pre-seizure states can be separated from the seizure-free state in a statistical sense.
First, an overview of CNN was given and their description by systems of coupled differential equations was presented. Furthermore, the possibilities of information processing with CNN, either by exploiting equilibrium states or the full spatio-temporal dynamics of the networks, were discussed. In addition, the class of reaction-diffusion networks (RD-CNN) was introduced. Polynomial weight functions were proposed to represent the largely general nonlinear cell coupling rules required here. Based on a presentation of the theory of local activity, necessary conditions for emergent behavior in RD-CNN were given. The statistical evaluation of prediction models was examined from a theoretical point of view. With the receiver operating characteristic (ROC), an analysis method was presented for assessing the predictive power of the temporal evolution of characterizing measures with respect to impending epileptic seizures.
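The ROC analysis used for such characterizing measures can be sketched as follows; a hedged toy example on synthetic Gaussian measure values (not patient data), using the rank-statistic form of the area under the ROC curve:

```python
import numpy as np

def roc_auc(preictal, interictal):
    """ROC analysis for a scalar characterizing measure: for every threshold,
    sensitivity is the fraction of pre-seizure values flagged and
    1 - specificity the fraction of seizure-free values falsely flagged.
    The area under the curve equals the probability that a random
    pre-seizure value exceeds a random seizure-free one (Mann-Whitney)."""
    pre = np.asarray(preictal)[:, None]
    inter = np.asarray(interictal)[None, :]
    return float(np.mean((pre > inter) + 0.5 * (pre == inter)))

rng = np.random.default_rng(0)
interictal = rng.normal(0.0, 1.0, 500)   # seizure-free state
preictal = rng.normal(1.0, 1.0, 500)     # assumed pre-seizure state
auc = roc_auc(preictal, interictal)
print(auc)  # well above 0.5 for partially separated distributions
```

An AUC of 0.5 corresponds to no discriminative power; values approaching 1 indicate that the measure separates the two states.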
Next, considerations on the numerical simulation of CNN and its flexible and extensible software implementation were developed. The resulting object-oriented simulation environment FORCE++, created within the scope of this thesis, was presented both conceptually and with regard to its software architecture.
The numerical simulation methods were applied to the problem of system identification with CNN. For this purpose, networks were determined such that their cell output values approximate the corresponding signal values of the observed system to be identified.
Since, in the present case of investigating brain electrical activity, the parameters of the CNN to be determined are not known and cannot be derived directly, supervised learning methods were employed to determine the networks. Learning methods of different classes were investigated for identification with CNN with polynomial weight functions. The performance of the presented identification method was examined in detail using known systems. It was found that the considered systems could be represented by CNN with high accuracy. As an example, the parameter region of local activity was computed for an RD-CNN, and the formation of patterns within the network was demonstrated by numerical simulations.
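Pattern formation in a locally active reaction-diffusion network can be illustrated with a toy model; this sketch uses a cubic polynomial cell nonlinearity with nearest-neighbor diffusive coupling and is an invented illustration, not the thesis's actual network:

```python
import numpy as np

def rd_cnn_step(x, dt=0.05, D=0.1):
    """One Euler step of a 1D reaction-diffusion CNN: cubic polynomial
    cell dynamics x' = x - x^3 (bistable, locally active) plus
    nearest-neighbor diffusive coupling with zero-flux boundaries."""
    lap = np.empty_like(x)
    lap[1:-1] = x[:-2] - 2 * x[1:-1] + x[2:]
    lap[0], lap[-1] = x[1] - x[0], x[-2] - x[-1]
    return x + dt * (x - x**3 + D * lap)

rng = np.random.default_rng(2)
x = 0.1 * rng.standard_normal(64)   # small random perturbation of x = 0
for _ in range(4000):
    x = rd_cnn_step(x)

# the unstable uniform state x = 0 decays into a stationary pattern of
# +1 / -1 domains; a non-emergent uniform network would give x.std() == 0
print(float(x.std()))
```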
After an introductory overview of the medical background of epilepsy and the recording of brain electrical activity, a comparative survey of published studies on the prediction of epileptic seizures was given. For the application of the identification method presented here to the analysis of brain electrical activity, the accuracy of the approximation of short EEG signal segments, regarded as quasi-stationary, was investigated first. By deliberately increasing the complexity of the networks used, the accuracy of the representation of EEG signals could be improved considerably. In addition, the generalization capability of the determined networks was examined; it was found that even signal values not considered in the supervised parameter optimization of the identification procedure were approximated with good accuracy. To specifically investigate the influence of information from the correlation of neighboring electrode signals, a method for multivariate prediction with discrete-time CNN (DT-CNN) was developed.
Here, a CNN estimates signal values of the considered electrode from past, correlated signal values of neighboring electrodes. For this task, a method could be given that determines the network parameters in the optimal sense solely from the statistical properties of the electrode signals. This allowed a considerable reduction of the computational complexity, which made an extensive investigation of intracranial long-term recordings possible.
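Determining predictor weights purely from second-order signal statistics amounts to solving the normal (Wiener-Hopf) equations; a hedged sketch with synthetic channels (channel count, names and noise level are invented for illustration):

```python
import numpy as np

def optimal_predictor(neighbors, target):
    """Least-squares (Wiener-style) predictor of a target electrode sample
    from samples of neighboring electrodes: the weights follow directly
    from second-order signal statistics via the normal equations R w = p,
    with R the neighbor correlation matrix and p the cross-correlation."""
    X = np.asarray(neighbors)            # shape (n_samples, n_inputs)
    d = np.asarray(target)               # shape (n_samples,)
    R = X.T @ X / len(d)                 # input correlation matrix
    p = X.T @ d / len(d)                 # cross-correlation vector
    return np.linalg.solve(R, p)

rng = np.random.default_rng(0)
X = rng.standard_normal((5000, 3))                 # 3 neighbor channels
true_w = np.array([0.6, -0.3, 0.1])
d = X @ true_w + 0.05 * rng.standard_normal(5000)  # noisy target channel
w = optimal_predictor(X, d)
print(w)
```

Because the weights come from a closed-form solve rather than iterative supervised training, this is exactly the kind of step that makes large-scale screening of long-term recordings computationally feasible.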
To analyze long-term recordings with the RD-CNN identification method, the numerical computations for simulating CNN with FORCE++ were carried out on a throughput-oriented high-performance computing cluster. With the results obtained in this way, comparative analyses could be performed. In addition, the presence of local activity in the determined RD-CNN was investigated.
The characterizing measures of brain electrical activity extracted with the described methods were evaluated by an automated procedure with respect to their predictive power for epileptic seizures. It was investigated to what extent the seizure-free state and an assumed pre-seizure state can be discriminated in a statistical sense by the respective measure. Complementary significance tests were carried out by parallel analyses with seizure-time surrogates.
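A seizure-time surrogate test can be sketched as follows (a toy example on synthetic data; the statistic, window length and surrogate count are illustrative assumptions, not the thesis's exact procedure):

```python
import numpy as np

def surrogate_p_value(measure, seizure_idx, window, n_surrogates=200, seed=0):
    """Seizure-time surrogate test: compare the mean of a characterizing
    measure in the assumed pre-seizure windows against the same statistic
    for randomly placed ('surrogate') seizure times. The p-value is the
    fraction of surrogates that separate at least as well."""
    rng = np.random.default_rng(seed)

    def preictal_mean(idx):
        return np.mean([measure[i - window:i].mean() for i in idx])

    observed = preictal_mean(seizure_idx)
    count = 0
    for _ in range(n_surrogates):
        fake = rng.integers(window, len(measure) - 1, size=len(seizure_idx))
        if preictal_mean(fake) >= observed:
            count += 1
    return (count + 1) / (n_surrogates + 1)

rng = np.random.default_rng(1)
x = rng.standard_normal(10000)   # synthetic characterizing measure
seizures = [3000, 7000]
x[2900:3000] += 1.5              # toy pre-seizure change before each seizure
x[6900:7000] += 1.5
p = surrogate_p_value(x, seizures, window=100)
print(p)
```

A small p-value indicates that the pre-seizure discrimination is unlikely to arise from randomly chosen seizure times alone.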
After evaluating several days of brain electrical signals from different patients, it was found that the methods developed in this thesis yield characterizing measures of brain electrical activity that apparently enable the identification of potential pre-seizure states.
Even though specificity and sensitivity must be improved further for broad medical application, the results obtained may represent a substantial step towards an implantable, CNN-based platform for the detection and prevention of epileptic seizures. The computations for the identification method with RD-CNN could be accelerated considerably by future specialized circuit implementations of multi-layer CNN with polynomial weight functions.
A new era in experimental nuclear physics has begun with the start-up of the Large Hadron Collider at CERN and its dedicated heavy-ion detector system ALICE. Measuring the highest energy density ever produced in nucleus-nucleus collisions, the detector has been designed to study the properties of the created hot and dense medium, assumed to be a Quark-Gluon Plasma.
Comprised of 18 high granularity sub-detectors, ALICE delivers data from a few million electronic channels of proton-proton and heavy-ion collisions.
The produced data volume can reach up to 26 GByte/s for central Pb–Pb collisions at the design luminosity of L = 10²⁷ cm⁻² s⁻¹, challenging not only the data storage but also the physics analysis. A High-Level Trigger (HLT) has been built and commissioned to reduce this amount of data to a storable value prior to archiving, by means of data filtering and compression without loss of physics information. Implemented as a large high-performance compute cluster, the HLT is able to perform a full reconstruction of all events at the time of data-taking, which makes it possible to trigger based on the information of a complete event. Rare physics probes with high transverse momentum can be identified and selected to enhance the overall physics reach of the experiment.
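Such an event-level selection can be illustrated schematically (a hedged toy sketch; the Track type, threshold and multiplicity requirement are invented placeholders, not the actual ALICE HLT trigger configuration):

```python
from dataclasses import dataclass

@dataclass
class Track:
    pt: float  # transverse momentum in GeV/c

def high_pt_trigger(event_tracks, threshold=5.0, min_tracks=1):
    """Illustrative software trigger decision: accept the event if it
    contains at least `min_tracks` reconstructed tracks above a
    transverse-momentum threshold."""
    return sum(1 for t in event_tracks if t.pt > threshold) >= min_tracks

events = [
    [Track(0.4), Track(1.2)],             # soft event: rejected
    [Track(0.7), Track(6.3), Track(2.1)], # contains a hard track: accepted
]
accepted = [ev for ev in events if high_pt_trigger(ev)]
print(len(accepted))  # 1
```

The essential point is that the decision is made on fully reconstructed events, so arbitrary event-level quantities (not just raw detector signals) can enter the selection.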
The commissioning of the HLT is at the center of this thesis. Being deeply embedded in the ALICE data path and therefore interfacing all other ALICE subsystems, this commissioning posed a major challenge and required a massive coordination effort, which was completed with the first proton-proton collisions reconstructed by the HLT. Furthermore, this thesis is completed with the study and implementation of on-line high transverse momentum triggers.
8th International Conference on Nuclear Physics at Storage Rings (STORI'11), October 9-14, 2011, Laboratori Nazionali di Frascati, Italy.
Storage rings offer the possibility of measuring proton- and alpha-induced reactions in inverse kinematics. The combination of this approach with a radioactive beam facility allows, in principle, the determination of the respective cross sections for radioactive isotopes. Such data are highly desired for a better understanding of astrophysical nucleosynthesis processes like the p-process. A pioneering experiment has been performed at the Experimental Storage Ring (ESR) at GSI using a stable 96Ru beam at 9-11 AMeV and a hydrogen target. Monte-Carlo simulations of the experiment were made using the Geant4 code. In these simulations, the experimental setup is described in detail and all reaction channels can be investigated. Based on the Geant4 simulations, a prediction of the shape of different spectral components can be performed. A comparison of simulated predictions with the experimental results shows good agreement and allows the extraction of the cross section.
The development of a non-destructive measurement method for ion beam parameters has been treated in various projects. Although results are promising, the high complexity of the beam dynamics has made it impossible to implement real-time process control up to now. In this paper we propose analysis methods based on the dynamics of Cellular Nonlinear Networks (CNN) that can be implemented on pixel-parallel CNN-based architectures and yield satisfying results even at low resolutions.
After five years of running at RHIC, and on the eve of the LHC heavy-ion program, we highlight the status of femtoscopic measurements. We emphasize the role interferometry plays in addressing fundamental questions about the state of matter created in such collisions, and present an enumerated list of measurements, analyses and calculations that are needed to advance the field in the coming years.
Abrasion-ablation models and the empirical EPAX parametrization of projectile fragmentation are described. Their cross section predictions are compared to recent data of the fragmentation of secondary beams of neutron-rich, unstable 19,20,21O isotopes at beam energies near 600 MeV/nucleon as well as data for stable 17,18O beams.
The TATA Box Binding Protein (TBP) is a 20 kD protein that is essential and universally conserved in eucarya and archaea. Especially among archaea, organisms can be found that live below 0°C as well as organisms that grow above 100°C. The archaeal TBPs show a high sequence identity and a similar structure consisting of α-helices and β-sheets that are arranged in a saddle-shaped fold with internal twofold symmetry. In previous studies, we characterized the thermal stability of thermophilic and mesophilic archaeal TBPs by infrared spectroscopy and showed the correlation between the transition temperature (Tm) and the optimal growth temperature (OGT) of the respective donor organism. In this study, a "new" mutant TBP has been constructed, produced, purified and analyzed for a deeper understanding of the molecular mechanisms of thermoadaptation. The β-sheet part of the mutant consists of the TBP from Methanothermobacter thermoautotrophicus (OGT 65°C, MtTBP65), whose α-helices have been exchanged by those of Methanosarcina mazei (OGT 37°C, MmTBP37). The hybrid TBP irreversibly aggregates after thermal unfolding, just like MmTBP37 and MtTBP65, but its Tm lies between those of MmTBP37 and MtTBP65, indicating that the interaction between the α-helical and β-sheet parts of the TBP is crucial for thermal stability. The temperature stability is probably encoded in the variable α-helices that interact with the highly conserved, DNA-binding β-sheets.