In this thesis, we studied the single-impurity Anderson model and developed a new and fast impurity solver for dynamical mean-field theory (DMFT). Using this new impurity solver, we studied the Hubbard model and the periodic Anderson model for various parameters. This work is motivated by the fact that DMFT is widely used for studies of strongly correlated systems, while the most frequently used impurity solvers, e.g. the quantum Monte Carlo (QMC) method and the exact diagonalization method, are very demanding in CPU time and usually limited by the available computing resources. A fast and reliable impurity solver is therefore needed. The new impurity solver is based on the equation-of-motion method (also called the Green's function decoupling method in parts of the literature). Using the retarded Green's function, we first derived the equations of motion of the Green's functions. We then employed a decoupling scheme to close the equations. By solving the resulting closed set of integral equations self-consistently, we obtained the single-particle Green's function of the single-impurity Anderson model. The single-impurity Anderson model was then solved together with the self-consistency conditions within the framework of DMFT. In this work, we studied and compared two decoupling schemes. Moreover, we derived possible higher-order approximations, which will be tested in future work. Besides the theoretical work, we tested the method in numerical calculations. The integral equations were first solved by iterative methods with linear mixing and with Broyden mixing, respectively. However, these two methods are not sufficient for finding the self-consistent solutions of the DMFT equations, because converged results are difficult to obtain. Moreover, their computing speed is not satisfactory: the iterative method with linear mixing in particular always costs a lot of CPU time because of the small mixing parameter it requires.
Hence, we developed a new method, a combination of a genetic algorithm with the iterative method. This new method converges very fast and removes artifacts that appear in the results of the iterative method with linear and Broyden mixing. It operates directly on the real axis, where no numerical error from high-frequency tail corrections or analytical continuation is introduced. In addition, our new technique strongly improves the precision of the numerical results by removing the broadening. With this newly developed impurity solver and numerical technique, we studied the single-impurity Anderson model, the single-band Hubbard model, and the periodic Anderson model with arbitrary spin and orbital degeneracy N on the real axis. For the single-impurity Anderson model, the spectral functions were calculated for infinite and finite Coulomb interaction strength. We also studied the dependence of the spectral functions on the impurity position and the hybridization. For the Hubbard model, we studied the bandwidth-controlled and filling-controlled Mott metal-insulator transition for spin and orbital degeneracy N = 2. This yields, qualitatively, the critical Coulomb interaction strength of the Mott metal-insulator transition, and spectral functions comparable to those obtained with QMC and numerical renormalization group methods. We also studied the quasiparticle weight and the self-energy in the metallic state; the latter shows almost Fermi-liquid behavior. Finally, we calculated the densities of states of the Hubbard model with arbitrary spin and orbital degeneracy N. The periodic Anderson model (PAM) was also studied as another important lattice model. It was solved for various combinations of parameters: the Coulomb interaction strength, the impurity position, the center position of the conduction band, the hybridization, and the spin and orbital degeneracy. The PAM results capture the physics of impurities in a metal.
In short, our method works for the Hubbard model and the periodic Anderson model over a large range of parameters and gives good results. Our impurity solver could therefore be very useful in LDA+DMFT calculations. Finally, building on the success in the single-band case, we made a preliminary investigation of multi-band systems. We first studied the two-band system in a simplified treatment, neglecting the interaction between the two bands through the bath; this gave promising numerical results for the two-band Hubbard model. Moreover, we studied the two-band system theoretically with the mean-field approximation and the Hubbard-I approximation for the higher-order cross Green's functions that couple the two bands. In the mean-field approximation, we generalized the two-band system to an arbitrary M = N/2 band system. Further improvements can be carried out on the basis of this work.
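The iterative scheme with linear mixing mentioned above can be illustrated on a toy self-consistency problem. This is a minimal sketch, not the thesis code: it shows why a small mixing parameter stabilizes the iteration but makes it slow, which is the CPU-time issue described in the abstract.

```python
# Toy illustration (not the thesis code): an iterative method with linear
# mixing for a self-consistency equation of the form x = F(x).
# A small mixing parameter alpha stabilizes the iteration but slows it down.
import math

def solve_fixed_point(F, x0, alpha, tol=1e-10, max_iter=100000):
    """Iterate x <- (1 - alpha) * x + alpha * F(x) until converged."""
    x = x0
    for n in range(1, max_iter + 1):
        x_new = (1.0 - alpha) * x + alpha * F(x)
        if abs(x_new - x) < tol:
            return x_new, n
        x = x_new
    raise RuntimeError("iteration did not converge")

# Example self-consistency equation: x = cos(x).
root_fast, n_fast = solve_fixed_point(math.cos, x0=1.0, alpha=0.5)
root_slow, n_slow = solve_fixed_point(math.cos, x0=1.0, alpha=0.05)
# both runs find the same solution, but the small mixing needs far more steps
```

Broyden mixing replaces the fixed `alpha` by an approximate Jacobian update, which typically reduces the iteration count at the cost of extra bookkeeping.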
The current thesis is devoted to a systematic study of fluctuations and correlations in heavy-ion collisions, which may be considered probes of the phase transition and the critical point in the phase diagram, within the Hadron-String-Dynamics (HSD) microscopic transport approach. This is a powerful tool for studying nucleus-nucleus collisions and allows one to simulate experimental collisions completely on an event-by-event basis. The transport model has thus been used to study fluctuations and correlations, including the influence of the experimental acceptance as well as of centrality, system size, and collision energy. The comparison to experimental data can isolate the effects induced by a phase transition, since there is no phase transition in the HSD version used here. First, the centrality dependence of multiplicity fluctuations has been studied. Different centrality selections have been performed in the analysis, in correspondence with the experimental situation: for the fixed-target experiment NA49, events with fixed numbers of projectile participants have been studied, while for the collider experiment PHENIX, centrality classes of events have been defined by the multiplicity in a certain phase-space region. A decrease of participant-number fluctuations (and thus volume fluctuations) in more central collisions has been obtained for both experiments. Another part of this work addresses transport-model calculations of multiplicity fluctuations in nucleus-nucleus collisions as a function of collision energy and system size. This study corresponds directly to the experimental program of the NA61 Collaboration at the SPS. Central C+C, S+S, In+In, and Pb+Pb nuclear collisions at Elab = 10, 20, 30, 40, 80, and 158 AGeV have been investigated. The expected enhanced fluctuations, attributed to the critical point and phase transition, could then be observed experimentally on top of a monotonic and smooth 'hadronic background'.
These findings should be helpful for the optimal choice of collision systems and collision energies in the experimental search for the QCD critical point. Other observables are fluctuations of hadron ratios (e.g. pions, kaons, protons, etc.), which are not much affected by volume fluctuations. In particular, HSD results for the kaon-to-pion ratio fluctuations, which have long been regarded as a promising observable, are presented from low SPS energies up to high RHIC energies. In addition to the HSD calculations, a statistical model is also used, in the microcanonical, canonical, and grand-canonical ensembles. Furthermore, a study of the system-size event-by-event fluctuations causing forward-backward rapidity correlations in relativistic heavy-ion collisions is presented. The HSD simulations reveal strong forward-backward correlations and reproduce the main qualitative features of the STAR data in A+A collisions at RHIC energies. It is shown that strong forward-backward correlations arise from averaging over many different events belonging to one centrality bin. An optimization of the experimental selection of centrality classes is presented, which is relevant for the program of the NA61 Collaboration at CERN, the low-energy program at RHIC, as well as future experiments at FAIR.
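Event-by-event multiplicity fluctuations of the kind studied above are commonly quantified by the scaled variance. A minimal sketch, with invented event multiplicities (not HSD output):

```python
# Hedged sketch: the scaled variance omega = Var(N) / <N> often used to
# quantify event-by-event multiplicity fluctuations (omega = 1 for a
# Poisson distribution). The event sample below is made up for illustration.

def scaled_variance(multiplicities):
    n = len(multiplicities)
    mean = sum(multiplicities) / n
    var = sum((m - mean) ** 2 for m in multiplicities) / n
    return var / mean

events = [98, 103, 101, 95, 107, 99, 100, 97]  # hypothetical multiplicities
omega = scaled_variance(events)
```

Fluctuations of the participant number enter this quantity as an additional "volume" contribution, which is why tight centrality selection matters in the analyses described above.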
Kaon and pion production in centrality selected minimum bias Pb+Pb collisions at 40 and 158A GeV
(2009)
Results on charged-kaon and negatively charged pion production and spectra in centrality-selected Pb+Pb minimum-bias events at 40 and 158A GeV are presented in this thesis. All analyses are based on data taken by the NA49 experiment at the Super Proton Synchrotron (SPS) accelerator of the European Organization for Nuclear Research (CERN) in Geneva, Switzerland. The kaon results are based on an analysis of the mean energy loss <dE/dx> of the charged particles traversing the detector gas of the time projection chambers (TPCs). The pion results come from an analysis of all negatively charged particles h-, corrected for contributions from particle decays and secondary interactions. For the dE/dx analysis of charged kaons, main-TPC tracks with a total momentum between 4 and 50 GeV have been analyzed in bins of logarithmic momentum log(p) and transverse momentum pt. The resulting dE/dx spectra have been fitted by a sum of five Gaussians, one for each main particle type (electrons, pions, kaons, protons, deuterons). The amplitude of the Gaussian describing the kaon part of the spectra has been corrected for efficiency and acceptance, and the binning has been transformed to rapidity y and transverse momentum pt. The multiplicity dN/dy in each rapidity bin has been derived by summing the measured range of the transverse-momentum spectrum and extrapolating to full coverage with a single exponential function fitted to the measured range. The results have been combined with the mid-rapidity measurements from the time-of-flight detectors, and a double-Gaussian fit to the dN/dy spectra has been used to extrapolate to rapidities outside the acceptance of the dE/dx analysis. For the h- analysis of negatively charged pions, all negatively charged tracks have been analyzed. The background from secondary reactions, particle decays, and gamma conversions has been corrected with the VENUS event generator.
The results were also corrected for efficiency and acceptance, and the pt spectra were analyzed and extrapolated where necessary to derive the mean yield per rapidity bin dN/dy. The mean multiplicity <pi-> has been derived by summing the measured dN/dy and extrapolating the rapidity spectrum to 4pi coverage with a double-Gaussian fit. The results have been discussed in detail and compared to various model calculations. Microscopic models like UrQMD and HSD do not describe the full complexity of Pb+Pb collisions. In particular, the production of positively charged kaons, which carry the major part of the strange quarks, cannot be consistently reproduced by the model calculations. Centrality-selected minimum-bias Pb+Pb collisions can be described as a mixture of a high-density region of multiply colliding nucleons (core) and practically independent nucleon-nucleon collisions (corona). This leads to a smooth evolution from peripheral to central collisions. A more detailed approach derives the ensemble volume from a percolation of elementary clusters. In the percolation model, all clusters are formed from coalescing strings that are assumed to decay statistically with the volume dependence of canonical strangeness suppression. The percolation model describes the measured data at top SPS and RHIC energies. At 40A GeV, the system-size dependence of the relative strangeness production starts to evolve, from peripheral events onwards, away from the saturation seen at higher energies towards the linear dependence found at SIS and AGS. This change in the system-size dependence occurs in the energy region of the observed maximum of the K+ to pi ratio for central Pb+Pb collisions. Future measurements with heavy-ion beam energies around this maximum at RHIC and FAIR, as well as at the upgraded NA49 successor experiment NA61, will further improve our understanding of quark matter and its reflection in modern heavy-ion physics and theories.
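The yield-extraction step described above (sum the measured part of a pt spectrum, extrapolate the unmeasured tail with a single exponential) can be sketched as follows. All numbers, the fitted amplitude A, and the inverse slope T are illustrative placeholders, not NA49 data:

```python
import math

# Hedged sketch of a dN/dy extraction: sum the measured pt bins and add the
# analytic tail of an exponential fit f(pt) = A * exp(-pt / T) beyond the
# measured range. A, T, and the bin contents are invented for illustration.

def dn_dy(bin_contents, bin_width, pt_max, A, T):
    measured = sum(bin_contents) * bin_width
    # integral of A * exp(-pt / T) from pt_max to infinity
    tail = A * T * math.exp(-pt_max / T)
    return measured + tail

yield_per_y = dn_dy(bin_contents=[12.0, 8.0, 5.0, 3.0], bin_width=0.2,
                    pt_max=0.8, A=5.0, T=0.25)
```

The rapidity extrapolation with a double-Gaussian fit plays the analogous role in the second dimension, closing the acceptance gaps in y rather than in pt.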
In this work, two systems of biological energy conversion were investigated with various spectroscopic methods, and new insights were gained into the function and activation of the proteins proteorhodopsin and RuBisCO. In addition, a new methodological approach for the investigation of carboxylation reactions was established. This approach offers broad future possibilities for studying this biologically important class of reactions. With the help of infrared spectroscopy, and above all through low-temperature measurements, the hitherto controversially discussed photocycle of proteorhodopsin (PR) could be characterized in detail. Beyond the well-understood active transport at pH 9.0, the photocycle at pH 5.1 was investigated in particular. For the first time, the M intermediate at pH 5.1 could also be detected in infrared spectra. This intermediate is of crucial importance for active transport across the cell membrane, and its existence had previously been doubted many times. In addition, Glu-108 could be identified as a possible proton acceptor of the photocycle at pH 5.1. Using a pH indicator, it could be shown that protons are released in the acidic pH range as well. This establishes that active proton transport is possible at pH 5.1. Together with information on protonatable amino-acid side chains (primarily Asp and Glu), this supports, with some restrictions, the hypothesis that PR pumps protons in different directions above and below the pKa value of Asp-97. This yields a differentiated picture of the pH-dependent photocycle of PR, with three pH regimes (pH 9.0, 8.5 to 5.5, and 5.1) in which PR shows different proton-transport pathways. As a further biological system, RuBisCO was investigated in more detail. The focus of this work was its activation through the formation of a lysine carbamate in the active site.
Although RuBisCO is the most abundant enzyme on our planet, plays an important role in carbon fixation, and although several dozen crystal structures exist, there are still plenty of open questions concerning its activation. With the help of a caged CO2, the carbamate formation in the enzyme could be followed directly and the influence of magnesium ions on the activation could be observed. This clearly ruled out that magnesium is already required for carbamate formation; the coordination of Mg2+ only becomes essential for the enediol formation later in the reaction cycle. In addition, it was shown that azide inhibits the enzyme through competition with CO2 for the binding site; over time, however, CO2 displaces the azide ion. The results for RuBisCO clearly demonstrate that the combination of caged CO2 and rapid-scan IR spectroscopy opens up an entirely new field for the investigation of carboxylation reactions. The open questions concerning biotin-dependent carboxylases in particular offer a broad field of application for this methodology.
Table of Contents
1. Introduction
 1.1 Present-day attempts at explanation and research results
 1.2 Aim and scope of the present work
 1.3 Intention and explanation of the experimental series
2. Foundations and methods concerning the subjective visual perceptual space
 2.1 The nativist and the empiricist view
 2.2 Spatial arrangement of the perceived objects
 2.3 On visually mediated determination of direction and position
 2.4 Visual evaluation of corresponding retinal locations
 2.5 Visual evaluation of disparate retinal locations
 2.6 Size constancy
 2.7 Psychophysical foundations and thresholds
 2.8 Physiological foundations
3. Experimental investigation
 3.1 Experimental setup and procedure
  3.1.1 Composition of the participant groups
  3.1.2 Explanation and procedure of the two experimental series
 3.2 Graphical presentation of the measurement results
  3.2.1 Experimental series I
  3.2.2 Experimental series II
 3.3 Evaluation and processing of the measurement data
  3.3.1 Evaluation of experimental series I
  3.3.2 Evaluation of experimental series II
  3.3.3 Error analysis of experimental series I and II
 3.4 Discussion of the measurement data
4. Summary and outlook
Glossary with brief explanations
Bibliography
Image credits
In conclusion, the following can be said: the working hypothesis put forward was verified by the two experimental series, for the results showed the following:
- In the measurement runs of experimental series I, the adjusted size increased the more distance information was admitted; that is, the increase grew as the AID grew. Also, in all measurement runs the monocular size settings, at otherwise constant AID, were smaller than the binocular size settings. When the adjustment distance was reduced, the deviations between the subjective and the objective sizes likewise became larger. The subjective visual perceptual size therefore depends on the AID as follows: the visual system subjectively rates the perceptual size upwards at maximal AID and, relative to that, downwards at minimal AID.
- That the postulated parameters determine the AID could be shown by the first measurement run, since each increase of the adjusted size was produced by varying a single parameter. Transverse disparity, however, could not be investigated in isolation here as a parameter determining the AID: most participants very quickly experienced double images, which caused them discomfort. Nevertheless, this parameter entered the degree of convergence as an influencing quantity. The retinal image could only be considered in isolation in combination with the psychological sense of nearness. So that the preconditions were the same in both experiments, experimental series II was measured under the same experimental conditions as series I. Here too, the distance information was admitted successively from minimal to maximal.
The measurement data of experimental series II showed unambiguously that the distance-difference threshold becomes smaller the more distance criteria are added, i.e. the more the AID is increased. Conversely, the relations reverse when the AID is decreased. This causal connection between the distance-difference threshold of the visual system and the quality of the AID additionally confirms the assumption that the introduced distance parameters are indeed to be regarded as such and constitute the AID. For if they were not constituents of the AID, the difference thresholds of experimental series II would have to be roughly equal. Since, however, the changes of the boundary conditions concerned the usable distance information and thus changed the AID in each case, the assumption made about the parameters that determine the AID is justified.
- That the subjective size settings lie furthest apart from the central projection in the orthostereoscopic region was confirmed by all measurement runs of experimental series I. In this region the visual perceptual size is maximally independent of the visual angle, and the size-constancy performance of the visual system is of very high quality. That size constancy qualitatively satisfies the formalism set up in assumption 2, and that the proposed qualitative relation describes it, could not be shown. The reason lies in how size constancy comes about: it is known to result from a change of distance. Depending on whether an object approaches the observer or recedes, this image-size compensation sets in. It is therefore subject to a dynamic process and thus cannot be described by relation (2').
- With relation (2') one can qualitatively describe and explain the indeterminacy in the visually perceived distance.
The aspect of the distance-difference threshold is somewhat confusing. On the one hand, it is a performance capacity of the visual system that depends on the available distance information, which in turn determines the AID. On the other hand, the distance-difference threshold determines the AID through its quality, i.e. it conversely also influences the AID. Experimental series II addressed this performance capacity of the visual system and its dependence on the parameters that also determine the AID. This served to show, additionally, that these parameters are indeed parameters that determine the AID. The chain of argument ran as follows: the distance-difference threshold influences the AID; the parameters under consideration influenced the distance-difference threshold, which was verified experimentally; from this it followed that these same parameters also determine the AID. This argument served only as an additional aid. Under point 4, the distance-difference threshold was to be considered with respect to its influence on the indeterminacy. This, however, is only of secondary relevance, since the application of relation (2') was in the foreground here.
- Whether the fitting function that approximated the measurement data of experimental series I is suitable as an algorithm for rendering a motion simulation cannot yet be said; further investigations describing diagonal motion still have to be carried out. For frontal forward and backward motion, the motion sequence simulated with the fitting function is more realistic than the linear rendering. This is particularly noticeable in the first 100 cm of scene depth, since the fitting function takes the size-constancy performance of the visual system into account.
The algorithms used on the conventional computer-game market for rendering forward and backward motion, by contrast, are nearly linear, which gives the observer a somewhat unnatural visual impression. The fitting function could also be used in the simulation of animated films, where the size-constancy performance of the visual system is likewise not taken into account. Yet it is precisely this constancy performance that shapes the size variation of perceived objects when their distance changes. This is particularly noticeable in the orthostereoscopic region.
Since the first observation of the halo phenomenon 20 years ago, more and more neutron-rich light nuclei have been observed. The study of unstable nuclear systems beyond the dripline is a relatively new branch of nuclear physics. In the present work, the results of an experiment at GSI (Darmstadt) with relativistic beams of the halo nuclei 8He, 11Li, and 14Be, with energies of 240, 280, and 305 MeV/nucleon, respectively, impinging on a liquid hydrogen target are discussed. Neutron/proton knockout reactions lead to the formation of unbound systems, followed by their immediate decay. The experimental setup, consisting of the neutron detector LAND, the dipole spectrometer ALADIN, and different types of tracking detectors, allows the reconstruction of the momentum vectors of all reaction products measured in coincidence. The properties of the unbound nuclei are investigated by reconstructing the relative-energy spectra as well as by studying the angular correlations between the reaction products. The observed systems are 9He, 10He, 10Li, 12Li, and 13Li. The isotopes 12Li and 13Li are observed for the first time. They are produced in the 1H(14Be, 2pn)12Li and 1H(14Be, 2p)13Li knockout reactions. The obtained relative-energy spectrum of 12Li is described as a single virtual s-state with a scattering length of as = -13.7(1.6) fm. The spectrum of 13Li is interpreted as a resonance at an energy of Er = 1.47(13) MeV with a width of Gamma ~ 2 MeV, superimposed on a broad correlated background distribution. The isotope 10Li is observed after one-neutron knockout from the halo nucleus 11Li. The obtained relative-energy spectrum is described by a low-lying virtual s-state with a scattering length as = -22.4(4.8) fm and a p-wave resonance with Er = 0.566(14) MeV and Gamma = 0.548(30) MeV, in agreement with previous experiments.
The observation of the nucleus 8He in coincidence with one or two neutrons, resulting from proton knockout from 11Li, allows one to reconstruct the relative-energy spectra of the heavy helium isotopes 9He and 10He. The low-energy part of the 9He spectrum is described by a virtual s-state with a scattering length as = -3.16(78) fm. In addition, two resonance states with l ≠ 0 at energies of 1.33(8) and 2.4 MeV are observed. For the 10He spectrum, two interpretations are possible: it can be interpreted as a superposition of a narrow resonance at 1.42(10) MeV and a broad correlated background distribution; alternatively, it is well described by two resonances at energies of 1.54(11) and 3.99(26) MeV. Additionally, three-body energy and angular correlations in the 10He and 13Li nuclei in the region of the ground state (0 < ECnn < 3 MeV) are studied, providing information about the structure of these unbound nuclear systems.
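A resonance in a relative-energy spectrum of the kind described above is conventionally parametrized by a Breit-Wigner lineshape. A minimal sketch, using the 13Li values quoted above (Er = 1.47 MeV, Gamma ~ 2 MeV) and a simple non-relativistic form (the actual fit functions of the analysis may differ):

```python
# Hedged sketch: a non-relativistic Breit-Wigner lineshape, normalized to a
# peak height of 1 at E = Er. Er and Gamma are the 13Li values quoted above;
# the precise parametrization used in the thesis may differ.

def breit_wigner(E, Er, Gamma):
    """Breit-Wigner profile with peak value 1 at the resonance energy."""
    return (Gamma ** 2 / 4.0) / ((E - Er) ** 2 + Gamma ** 2 / 4.0)

peak = breit_wigner(1.47, 1.47, 2.0)        # 1.0 at the resonance energy
half = breit_wigner(1.47 + 1.0, 1.47, 2.0)  # 0.5 at Er + Gamma/2
```

Gamma is thus read off directly as the full width at half maximum of the fitted peak.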
This thesis is devoted to the development of a classical model for the study of the energetics and stability of carbon nanotubes. The motivation behind such a model stems from the fact that producing nanotubes in a well-controlled manner requires a detailed understanding of their energetics. Different theoretical approaches to this problem are possible, ranging from computationally expensive quantum-mechanical first-principles methods to relatively simple classical models. A well-constructed classical model has the advantage that it can be used for systems of any size while still producing reasonable results. The model developed in this thesis is based on the well-known liquid drop model without the volume term, and hence we call it the liquid surface model. Based on the assumption that the energy of a nanotube can be expressed in terms of its geometrical parameters, such as surface area, curvature, and the shape of the edge, the liquid surface model is able to predict the binding energy of a nanotube of any chirality once its total energy and chiral indices are known. The model is formulated for open-ended and capped nanotubes, and it is shown that the energy of capped nanotubes is determined by five physical parameters, while for open-ended nanotubes three parameters are sufficient. The parameters of the liquid surface model are determined from calculations performed with the empirical Tersoff and Brenner potentials, and the accuracy of the model is analyzed. It is shown that the liquid surface model can predict the binding energy per atom of capped nanotubes with a relative error below 0.3% of the value calculated with the Brenner potential, corresponding to an absolute energy difference of less than 0.01 eV. The influence on the nanotube energetics of the catalytic nanoparticle on top of which a nanotube grows is also discussed.
It is demonstrated that the presence of the catalytic nanoparticle changes the binding energy per atom in such a way that if the interaction of the nanotube with the catalytic nanoparticle is weak, the attachment of an additional atom to the nanotube is an energetically favourable process, while if the nanoparticle-nanotube interaction is strong, it becomes energetically more favourable for the nanotube to collapse. The suggested model gives important insights into the energetics and stability of nanotubes of different chiralities and is an important step towards understanding the nanotube growth process. The Young modulus and the curvature constant are calculated for single-wall carbon nanotubes from the parameters of the liquid surface model, and the obtained values are shown to be in agreement with values reported earlier, both theoretically and experimentally. The calculated Young modulus and curvature constant were used to draw conclusions about the accuracy of the Tersoff and Brenner potentials: since the parameters of the liquid surface model are obtained from Tersoff- and Brenner-potential calculations, the agreement of the elastic properties derived from these parameters implies that both potentials are capable of describing the elastic properties of nanotubes. Finally, the thesis discusses possible extensions of the model to various systems of interest.
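The idea of expressing a tube's energy through geometric terms can be sketched schematically. The functional form and every coefficient below (eps_s, eps_c, eps_e, sigma) are hypothetical placeholders, not the fitted parameters of the thesis; the sketch only illustrates how a surface term, a curvature term, and an edge term combine for an open-ended tube:

```python
import math

# Schematic illustration only: a liquid-surface-style energy built from
# surface-area, curvature, and edge terms for an open-ended tube of radius R
# and length L. All coefficients are invented placeholders.

def open_tube_energy(R, L, eps_s=-7.0, eps_c=0.1, eps_e=1.0, sigma=38.0):
    area = 2.0 * math.pi * R * L            # cylindrical surface area
    edge = 2.0 * 2.0 * math.pi * R          # circumference of the two rims
    surface_term = eps_s * sigma * area     # sigma = atoms per unit area
    curvature_term = eps_c * sigma * area / R ** 2
    edge_term = eps_e * edge
    return surface_term + curvature_term + edge_term

def binding_energy_per_atom(R, L, sigma=38.0):
    n_atoms = sigma * 2.0 * math.pi * R * L
    return open_tube_energy(R, L, sigma=sigma) / n_atoms

e_narrow = binding_energy_per_atom(R=0.4, L=10.0)
e_wide = binding_energy_per_atom(R=0.8, L=10.0)
# the wider tube pays a smaller curvature penalty, so it is more strongly bound
```

The per-atom energy reduces here to eps_s + eps_c/R^2 plus a small edge contribution, which is why the curvature term dominates the chirality/radius dependence in such models.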
This thesis investigates jet-medium interactions in a Quark-Gluon Plasma using a hydrodynamical model. Such a Quark-Gluon Plasma represents a very early stage of our universe and is assumed to be created in heavy-ion collisions; its properties are the subject of current research. Since the comparison of measured data to model calculations suggests that the Quark-Gluon Plasma behaves like a nearly perfect liquid, the medium created in a heavy-ion collision can be described by hydrodynamical simulations. One of the crucial questions in this context is whether highly energetic particles (so-called jets), which are produced at the beginning of the collision and traverse the formed medium, may lead to the creation of a Mach cone. Such a Mach cone is expected to develop whenever a jet moves with a velocity larger than the speed of sound relative to the medium. In that case, the measured angular particle distributions are expected to exhibit a characteristic structure, allowing direct conclusions about the Equation of State and in particular about the speed of sound of the medium. Several different scenarios of jet energy loss are examined (their exact form is not known from first principles), and different mechanisms of energy and momentum loss are analyzed, ranging from weak interactions (based on calculations from perturbative Quantum Chromodynamics, pQCD) to strong interactions (formulated using the Anti-de-Sitter/Conformal Field Theory correspondence, AdS/CFT). Although they result in different angular particle correlations, which could in principle allow one to distinguish the underlying processes (if it becomes possible to analyze single-jet events), it is shown that the characteristic structure observed in experimental data can arise from the different contributions of several possible jet trajectories through an expanding medium. Such a structure cannot directly be connected to the Equation of State.
In this context, the impact of the strong flow created behind the jet is examined, which is common to almost all jet-deposition scenarios. In addition, the transport equations of dissipative hydrodynamics are discussed, which are fundamental for any numerical computation of viscous effects in a Quark-Gluon Plasma.
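The Mach-cone condition invoked above can be made concrete with the standard textbook relation: a cone forms only for a supersonic jet, and its emission angle relative to the jet axis satisfies cos(theta_M) = c_s / v_jet. A minimal sketch, assuming an ideal gas of massless particles with c_s = 1/sqrt(3) (in units of c):

```python
import math

# Sketch of the standard Mach-cone relation cos(theta_M) = c_s / v_jet.
# For an ideal gas of massless particles the speed of sound is 1/sqrt(3)
# in units of c; a jet moving near the speed of light is assumed below.

def mach_angle(c_s, v_jet):
    """Emission angle (radians) relative to the jet axis; requires v_jet > c_s."""
    if v_jet <= c_s:
        raise ValueError("no Mach cone: jet is subsonic")
    return math.acos(c_s / v_jet)

theta = mach_angle(c_s=1.0 / math.sqrt(3), v_jet=1.0)
theta_deg = math.degrees(theta)  # about 54.7 degrees
```

This direct link between the cone angle and c_s is exactly why the angular structure was hoped to constrain the Equation of State; the thesis argues that trajectory averaging in an expanding medium breaks this simple connection.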
The study of the electromagnetic structure of hadrons plays an important role in understanding the nature of matter. In particular, the emission of lepton pairs out of the hot and dense collision zone in heavy-ion reactions is a promising probe for investigating in-medium properties of hadrons and, in general, the properties of matter under such extreme conditions. The first experimental observation of an enhanced di-electron yield in the invariant-mass region 0.3-0.7 GeV/c2 in p+Be collisions at 4.9 GeV/u beam energy [2] was announced by the DLS collaboration [1]. Recent results of the HADES collaboration show a moderate enhancement above the eta Dalitz decay contribution for 12C+12C at 1 and 2 GeV/u [3, 4], confirming the DLS results. There are several theoretical explanations of this observation, most of them focusing on possible in-medium modifications of the properties of vector mesons. At low beam energies, the question of whether the observed excess is related to any in-medium effects remains open because of uncertainties in the description of the elementary di-electron sources. In this work, the di-electron production in p+p and d+p reactions at a kinetic beam energy of 1.25 GeV/u, measured with the HADES spectrometer, is discussed. At Ekin = 1.25 GeV/u, i.e. below the eta meson production threshold in proton-proton reactions, the Delta Dalitz decay is expected to be the most abundant source above the pi0 Dalitz decay region. The observed large difference in di-electron production between p+p and d+p collisions suggests that di-electron production in the d+p system is dominated by the n+p interaction. In order to separate Delta Dalitz decays and np bremsstrahlung, the di-electron yields observed in p+p and n+p reactions, both measured at the same beam energy, have been compared. The main interest here is the investigation of isospin effects in baryonic resonance excitations and the off-shell production of vector mesons [5].
We indeed observe a large difference in di-electron production between p+p and n+p reactions. The results of these studies will be compared to recent calculations. We will also present our experimentally defined cocktail for the heavy-ion data. At much higher beam energies, experimental results of the CERES [6] and NA60 [7] collaborations also show an enhancement in the invariant-mass region 0.3-0.7 GeV/c2, in principle similar to the DLS situation. The strong excess of lepton pairs observed by recent high-energy heavy-ion dilepton experiments hints at a strong influence of baryons; however, no data exist for highly compressed baryonic matter, achievable in heavy-ion collisions at 8-45 GeV/u beam energy. These conditions would allow one to study the expected restoration of chiral symmetry by measuring in-medium modifications of hadronic properties, an experimental program foreseen for the future CBM experiment at FAIR. The experimental challenge is to suppress the large physical background on the one hand and to provide a clean identification of electrons on the other. In this work, strategies to reduce the combinatorial background in electron-pair measurements with the CBM detector are discussed. The main goal is to study the feasibility of effectively reducing the combinatorial background with the currently foreseen experimental setup, which does not provide electron identification in front of the magnetic field.
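A standard way to estimate the combinatorial background mentioned above is the like-sign method: uncorrelated unlike-sign pairs are estimated from the same-event like-sign pair yields. A minimal sketch with invented pair counts (not CBM or HADES data):

```python
import math

# Hedged sketch of the like-sign combinatorial-background estimate commonly
# used in dilepton analyses: B = 2 * sqrt(N++ * N--). The pair counts below
# are invented numbers for illustration.

def combinatorial_background(n_pp, n_mm):
    """Geometric-mean estimate of the unlike-sign combinatorial background."""
    return 2.0 * math.sqrt(n_pp * n_mm)

def signal(n_unlike, n_pp, n_mm):
    """Signal pairs = all unlike-sign pairs minus the estimated background."""
    return n_unlike - combinatorial_background(n_pp, n_mm)

s = signal(n_unlike=1500.0, n_pp=400.0, n_mm=400.0)
```

Because the signal is a small difference of large numbers, reducing the combinatorial background at the source (the rejection strategies studied in this work) matters far more than refining the subtraction itself.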