University publications
The aim of this work is to develop an effective equation of state (EoS) for QCD, having the correct asymptotic degrees of freedom, to be used as input for dynamical studies of heavy ion collisions. We present an approach for modeling an EoS that respects the symmetries underlying QCD and includes the correct asymptotic degrees of freedom, i.e. quarks and gluons at high temperature and hadrons in the low-temperature limit. We achieve this by including quark degrees of freedom and the thermal contribution of the Polyakov loop in a hadronic chiral sigma-omega model. The hadronic part of the model is a nonlinear realization of a sigma-omega model. As the fundamental symmetries of QCD should also be present in its hadronic states, such an approach is widely used to describe hadron properties below and around Tc. The quarks are introduced as thermal quasiparticles coupling to the Polyakov loop, while the dynamics of the Polyakov loop are controlled by a potential term fitted to reproduce pure gauge lattice data. In this model the sigma field serves as the order parameter for chiral restoration and the Polyakov loop as the order parameter for deconfinement. The hadrons are suppressed at high densities by excluded volume corrections. As a next step, we introduce our new HQ model equation of state in a microscopic+macroscopic hybrid approach to heavy ion collisions. This hybrid approach is based on the Ultra-relativistic Quantum Molecular Dynamics (UrQMD) transport approach with an intermediate hydrodynamical evolution for the hot and dense stage of the collision. The present implementation makes it possible to compare pure microscopic transport calculations with hydrodynamic calculations using exactly the same initial conditions and freeze-out procedure. The effects of the change in the underlying dynamics - ideal fluid dynamics vs. non-equilibrium transport theory - are explored.
The final pion and proton multiplicities are lower in the hybrid model calculation due to the isentropic hydrodynamic expansion, while the yields of strange particles are enhanced due to the local equilibrium in the hydrodynamic evolution. The elliptic and directed flow are shown to be insensitive to changes in the EoS, while the smaller mean free path in the hydrodynamic evolution translates directly into higher flow results, which are consistent with the experimental data. This finding indicates qualitatively that physical mechanisms such as viscosity and other non-equilibrium effects play a substantially more important role than the EoS when bulk observables like flow are investigated. In the last chapter, results for the thermal production of MEMOs in nucleus-nucleus collisions from a combined micro+macro approach are presented. Multiplicities, rapidity and transverse momentum spectra are predicted for Pb+Pb interactions at different beam energies. The presented excitation functions for various MEMO multiplicities show a clear maximum in the upper FAIR energy regime, making this facility the ideal place to study the production of these exotic forms of multistrange objects.
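The Polyakov-loop dynamics described above are governed by a potential term fitted to pure-gauge lattice data. The specific parametrization used in the thesis is not reproduced here; a polynomial form commonly used in the literature for this purpose reads

```latex
\frac{U(\Phi,\bar\Phi,T)}{T^4}
  = -\frac{b_2(T)}{2}\,\bar\Phi\Phi
    - \frac{b_3}{6}\left(\Phi^3 + \bar\Phi^3\right)
    + \frac{b_4}{4}\left(\bar\Phi\Phi\right)^2,
\qquad
b_2(T) = a_0 + a_1\,\frac{T_0}{T} + a_2\left(\frac{T_0}{T}\right)^{2} + a_3\left(\frac{T_0}{T}\right)^{3},
```

where Φ is the Polyakov loop, T_0 is the deconfinement scale of the pure gauge theory, and the coefficients a_i, b_3, b_4 are fixed by fitting the pressure, energy density and entropy of pure SU(3) lattice gauge theory.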
In order to fully understand the new state of matter formed in heavy ion collisions, it is vital to isolate the ever-present final-state hadronic contributions within the primary Quark-Gluon Plasma (QGP) experimental signatures. Previously, the hadronic contributions were determined using the properties of the known mesons and baryons. However, according to Hagedorn, hadrons should follow an exponential mass spectrum, which the known hadrons do only up to masses of M = 2 GeV. Beyond this point the mass spectrum is flat, which indicates that there are "missing" hadrons that could potentially contribute significantly to experimental observables. In this thesis I investigate the influence of these "missing" Hagedorn states on various experimental signatures of the QGP. Strangeness enhancement is considered a signal for the QGP because hadronic interactions (even including multi-mesonic reactions) underpredict the hadronic yields (especially for strange particles) at the Relativistic Heavy Ion Collider, RHIC. One can conclude that the time scales needed to produce the required hadronic yields are too long to allow the hadrons to reach chemical equilibrium within the lifetime of a cooling hadronic fireball. Because gluon fusion can quickly produce strange quarks, it has been suggested that the hadrons are born into chemical equilibrium following the Quantum Chromodynamics (QCD) phase transition. However, we show here that the missing Hagedorn states provide extra degrees of freedom that can contribute to fast chemical equilibration times for a hadron gas. We develop a dynamical scheme in which possible Hagedorn states contribute to fast chemical equilibration of X anti-X pairs (where X = p, K, Lambda, or Omega) inside a hadron gas just below the critical temperature. Within this scheme, we use master equations and derive various analytical estimates for the chemical equilibration times.
Applying a Bjorken picture to the expanding fireball, the hadrons can indeed quickly chemically equilibrate for both an initial overpopulation and an initial underpopulation of Hagedorn resonances. We compare the thermodynamic properties of our model to recent lattice results and find that for both critical temperatures, Tc = 176 MeV and Tc = 196 MeV, the hadrons can reach chemical equilibrium on very short time scales. Furthermore, the ratios p/pi, K/pi, Lambda/pi, and Omega/pi match experimental values well in our dynamical scenario. The effects of the "missing" Hagedorn states are not limited to the chemical equilibration time. Many believe that the new state of matter formed at RHIC is the closest to a perfect fluid found in nature, which implies that it has a small shear viscosity to entropy density ratio, close to the bound derived using the uncertainty principle. Our hadron resonance gas model, including the additional Hagedorn states, is used to obtain an upper bound on the shear viscosity to entropy density ratio, eta/s, of hadronic matter near Tc that is close to 1/(4 pi). Furthermore, the large trace anomaly and the small speed of sound near Tc computed within this model agree well with recent lattice calculations. We also comment on the behavior of the bulk viscosity to entropy density ratio of hadronic matter close to the phase transition, which qualitatively differs close to Tc from that of a hadron gas model with only the known resonances. We show how the measured particle ratios can be used to provide non-trivial information about the Tc of the QCD phase transition. This is obtained by including the effects of highly massive Hagedorn resonances in statistical models, which are generally used to describe hadronic yields. The inclusion of the "missing" Hagedorn states creates a dependence of the thermal fits on the Hagedorn temperature, TH, and leads to a slight overall improvement of the thermal fits.
We find that for Au+Au collisions at RHIC at sqrt(s_NN) = 200 GeV the best chi^2 fit occurs at TH = Tc = 176 MeV and yields a chemical freeze-out temperature of 172.6 MeV and a baryon chemical potential of 39.7 MeV.
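The master-equation treatment above reduces, in its simplest limit, to a relaxation-rate equation whose solution shows the approach to chemical equilibrium. A schematic sketch (the rate Gamma and the yields below are invented illustrative numbers, not the coupled Hagedorn-state equations of the thesis):

```python
import numpy as np

def relax(N0, N_eq, Gamma, t):
    """Schematic chemical equilibration: dN/dt = Gamma * (N_eq - N)."""
    return N_eq + (N0 - N_eq) * np.exp(-Gamma * t)

# Invented illustrative numbers: initial underpopulation, rate in units of 1/(fm/c)
N0, N_eq, Gamma = 0.2, 1.0, 0.5
for t in (0.0, 2.0, 5.0, 10.0):
    print(f"t = {t:4.1f} fm/c  ->  N/N_eq = {relax(N0, N_eq, Gamma, t) / N_eq:.3f}")
```

For rates of this order the yield relaxes to within a percent of equilibrium after roughly 10 fm/c, i.e. within a typical fireball lifetime, which is the qualitative point of the equilibration-time estimates.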
In this work we study compact stars, i.e. neutron stars, as cosmic laboratories for nuclear matter. With a mass of around 1-3 solar masses and a radius of around 10 km, compact stars are very dense and, besides nucleons, can contain exotic matter such as hyperons or quark matter. The KaoS collaboration studied nuclear matter at densities up to 2-3 times saturation density by analysing kaon multiplicities from Au+Au and C+C collisions. The results show that nuclear matter in the corresponding density region is very compressible, with a compression modulus below 200 MeV. For such soft nuclear equations of state the maximum masses of neutron stars are about 1.8-1.9 solar masses, whereas the central densities exceed 5 times nuclear saturation density and therefore point towards a possible phase transition to quark matter. If quark matter is present in the interior of neutron stars (so-called hybrid stars), it could already be produced during their birth in supernova explosions. To study this, we implement a quark matter phase transition in a hadronic equation of state which is used in supernova simulations. Supernova simulations of low and intermediate mass progenitors with two different bag constants show a collapse of the proto neutron star due to the softening of the equation of state in the quark-hadron mixed phase. The stiffening of the equation of state for pure quark matter halts the collapse and leads to the production of a second shock wave. The second shock wave is energetic enough to lead to an explosion of the star and produces a neutrino burst when passing the neutrinospheres. Furthermore, first studies of the long-time cooling of hybrid stars show that colour superconductivity can significantly influence the cooling behaviour of hybrid stars if all quarks form Cooper pairs.
For the so-called CSL phase (colour-spin locking) with pairing energies of several MeV, the cooling of the quark phase is suppressed and the hybrid star appears as a purely hadronic star.
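The link between a chosen equation of state and the quoted maximum neutron-star masses runs through the Tolman-Oppenheimer-Volkoff (TOV) equations. A minimal sketch with a simple Gamma = 2 polytrope (an illustrative EoS, not the hadronic or hybrid EoS used in the thesis):

```python
import numpy as np

# Geometrized units G = c = M_sun = 1; the length unit is then ~1.477 km.
K, Gamma = 100.0, 2.0            # illustrative polytrope p = K * rho^Gamma

def energy_density(p):
    """Total energy density: rest mass plus internal energy of the polytrope."""
    rho = (p / K) ** (1.0 / Gamma)
    return rho + p / (Gamma - 1.0)

def tov_star(p_c, dr=1e-3):
    """Euler-integrate the TOV equations outward; returns (M [M_sun], R [km])."""
    r, m, p = dr, 0.0, p_c
    while p > 1e-6 * p_c:        # stop near the stellar surface
        eps = energy_density(p)
        dm = 4.0 * np.pi * r**2 * eps
        dp = -(eps + p) * (m + 4.0 * np.pi * r**3 * p) / (r * (r - 2.0 * m))
        m, p, r = m + dm * dr, p + dp * dr, r + dr
    return m, r * 1.477

rho_c = 1.28e-3                  # central rest-mass density (a standard test value)
M, R = tov_star(K * rho_c**Gamma)
print(f"M = {M:.2f} M_sun, R = {R:.1f} km")
```

A stiffer EoS (larger pressure at given density) raises the maximum mass, which is why the soft KaoS-constrained EoS caps neutron stars near 1.8-1.9 solar masses.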
Direct photon emission from heavy-ion collisions has been calculated and compared to available experimental data. Three different models have been combined to extract direct photons from different environments in a heavy-ion collision: thermal photons from partonic and hadronic matter have been extracted from relativistic, non-viscous 3+1-dimensional hydrodynamic calculations, while thermal and non-thermal photons from hadronic interactions have been calculated from relativistic transport theory. The impact of different physics assumptions about the thermalized matter has been studied. In pure transport calculations, a viscous hadron gas is present. This is juxtaposed, in the hybrid model calculations with the various Equations of State, with ideal gases of hadrons with vacuum properties, hadrons which undergo a chiral and deconfinement phase transition, and a system with a strong first-order phase transition to a deconfined ideal gas of quarks and gluons. The models used for the determination of photons from both hydrodynamic and transport calculations have been elucidated and their numerical properties tested. The origin of direct photons, itemised by emission stage, emission time, channel and baryon number density, has been investigated for various systems, as have the transverse momentum spectra and elliptic flow patterns of direct photons. The photon emission rates from a thermalized transport box are found to be very similar to the hadronic emission rates used in the hydrodynamic calculations, as are the spectra from calculations of heavy-ion collisions with the transport model and with the hybrid model using a hadronic Equation of State. Taking into account the full (vacuum) spectral function of the rho meson decreases the direct photon emission by approximately 10% at low photon transverse momentum.
The numerical investigations show that the parameter with the largest impact on the direct photon spectra is the time at which the hydrodynamic description is started. Its variation leads to deviations of one to two orders of magnitude. In the regime that can be considered physical, however, the variation is less than a factor of 3. Other parameters change the direct photon yield by up to approximately 20%. In all systems that have been considered -- heavy-ion collisions at E_lab = 35 AGeV and 158 AGeV, and at sqrt(s_NN) = 62.4 GeV, 130 GeV and 200 GeV -- thermal emission from a system with partonic degrees of freedom is greatly enhanced over that from hadronic systems, while the difference between the direct photon yields from a viscous and a non-viscous hadronic system (transport vs. hydrodynamics) is found to be very small. Predictions for direct photon emission in central U+U collisions at 35 AGeV have been made. Since non-soft photon sources are strongly suppressed at this energy, experimental results should easily be able to distinguish between a medium that is entirely hadronic and a system that undergoes a phase transition from partonic to hadronic matter. In the case of lead-lead collisions at 158 AGeV, the situation is less clear. In central collisions, the complete direct photon spectra including prompt photons seem to favour hadronic emission sources, while the partonic calculations only slightly overpredict the data. In peripheral collisions at the same energy, the hadronic contribution is more than one order of magnitude smaller than the prompt photon contribution, which fits the available experimental data. A similar picture presents itself at higher energies. At RHIC energies, however, the difference between transport calculations and hadronic hybrid model calculations is largest. Hybrid model calculations with partonic degrees of freedom can describe the experimental results in gold-gold collisions at 200 GeV.
The elliptic flow component of direct photon emission is found to be consistently positive at small transverse momenta. This means that the initial photon emission from a non-flowing medium does not completely outshine the emission patterns from later stages. High-p_T photons come predominantly from the beginning of a heavy-ion collision and therefore do not carry the directed information of the evolving medium.
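The elliptic flow discussed above is the second Fourier coefficient v2 = <cos 2(phi - Psi_RP)> of the azimuthal emission distribution with respect to the reaction plane. A minimal sketch of how v2 is reconstructed from a sample of emission angles (synthetic angles drawn from an assumed distribution, not thesis data):

```python
import numpy as np

def v2(phis):
    """Elliptic flow: v2 = <cos(2*(phi - Psi_RP))>, reaction plane at Psi_RP = 0."""
    return np.mean(np.cos(2.0 * np.asarray(phis)))

# Synthetic sample from dN/dphi ~ 1 + 2*v2_true*cos(2*phi) via rejection sampling
rng = np.random.default_rng(1)
v2_true = 0.05
phis = []
while len(phis) < 200_000:
    phi = rng.uniform(-np.pi, np.pi)
    if rng.uniform(0.0, 1.0 + 2.0 * v2_true) < 1.0 + 2.0 * v2_true * np.cos(2.0 * phi):
        phis.append(phi)
print(f"reconstructed v2 = {v2(phis):.3f}")   # close to the input value 0.05
```

A positive v2 at low transverse momentum, as found above for direct photons, means in-plane emission is enhanced, carrying the flow imprint of the later collision stages.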
This dissertation is devoted to the study of the thermodynamics of quantum gauge theories. The poor convergence of quantum field theory at finite temperature has been the main obstacle in the practical application of thermal QCD for decades. In this dissertation I apply hard-thermal-loop perturbation theory (HTLpt), a gauge-invariant reorganization of the conventional perturbative expansion for quantum gauge theories, to the thermodynamics of QED and Yang-Mills theory to three-loop order. For the Abelian case, I present a calculation of the free energy of a hot gas of electrons and photons by expanding in a power series in mD/T, mf/T and e^2, where mD and mf are the photon and electron thermal masses, respectively, and e is the coupling constant. I demonstrate that the hard-thermal-loop reorganization improves the convergence of the successive approximations to the QED free energy at large coupling, e ~ 2. For the non-Abelian case, I present a calculation of the free energy of a hot gas of gluons by expanding in a power series in mD/T and g^2, where mD is the gluon thermal mass and g is the coupling constant. I show that at three-loop order hard-thermal-loop perturbation theory is compatible with lattice results for the pressure, energy density, and entropy down to temperatures T ~ 2-3 Tc. The results suggest that HTLpt provides a systematic framework that can be used to calculate static and dynamic quantities at temperatures relevant to the LHC.
This work focuses on the superstructure phases of the Yb-Cu system. The congruently melting compound YbCu4.5 is chosen as the starting point for crystal growth. To gain precise insight into the solidification behaviour of this phase, a series of DSC measurements is first carried out in the range between 17.3 and 22.4 at.% Yb. The results are only partially compatible with the phase diagrams published in the literature (Moffat [Mo92] and Massalski [Ma90], and Giovannini et al. [Gi08]). While a congruently melting phase of composition YbCu4.5 can be confirmed, the measurements indicate the existence of additional compounds, which, however, cannot be specified further by EDX analysis. To analyse these phases in more detail, single-crystal growth experiments using the Bridgman method are carried out in the range between 19 and 19.2 at.% Yb, and the crystals are characterized by single-crystal diffraction methods (SC-XRD and SAED). In this way, besides YbCu4.5, the previously unknown superstructure phases YbCu4.4 and YbCu4.25 are identified, whose melting temperatures are determined by DSC measurements to be 934(2) °C and 931(3) °C, respectively. The discovery of these two compounds confirms the existence of the superstructure phases RECu_x (x = 4.4 and 4.25) for the Yb-Cu system, previously predicted only theoretically by Cerný et al. [Ce03]. The growth behaviour of these superstructure phases is analysed using polarization and scanning electron microscopy and the Laue method. Layer growth is observed, with the layers forming parallel to the a and b directions and stacked along the c direction. Since a reliable distinction between the YbCu_x compounds is only possible with single-crystal diffraction methods, this work also investigates to what extent a characterization by powder diffraction is possible.
However, even measurements with synchrotron radiation at the ESRF in Grenoble do not allow an unambiguous distinction between the superstructure phases. The analysis of the composition range from 12.5 to 17.24 at.% Yb, adjacent to the superstructure region, confirms the existence of the compound YbCu6.5; a more copper-poor phase of composition YbCu5 cannot be detected in the DSC experiments. The measurements demonstrate the existence of a homogeneity range YbCu6.0+x with 0 <= x <= 0.5, in contrast to the phase diagram published by Giovannini et al. [Gi08]. SC-XRD measurements on single crystals of composition YbCu6.31(9) grown by the Bridgman method support the structural model found by Hornstra and Buschow [Ho72]. The shifts of the atomic positions caused by the copper content, which is increased compared with the YbCu5 compound, are traced using the measured and calculated pair distribution functions. Phase diagram studies and single-crystal growth results for further RE-Cu systems (RE = Ho, Gd) confirm the existence of the compound HoCu4.5 and strengthen the suspicion that further superstructure phases remain to be found in this as well as in the other systems.
In this doctoral thesis, a method was developed for determining the height of a vehicle's centre of gravity from the readings of sensors that are installed as standard in many off-road vehicles. The method requires only the signals of the sensors of the electronic stability program (ESP) and of a suspension with air springs. To determine the height of the centre of gravity, a model was designed that describes the rotational motion of the vehicle about its longitudinal axis. One of the unknown quantities in this model is the product m_g*Delta_h, where m_g denotes the sprung mass of the vehicle and Delta_h the distance between the centre of gravity and the vehicle's roll axis. The height of the centre of gravity is calculated by adding to this distance the distance of the roll axis from the road, which is assumed to be known. Three variants of the model were considered. One variant (the stationary model) describes the vehicle behaviour exactly only in driving situations in which the roll rate and the roll acceleration are negligibly small. In this variant, the spring forces were computed with a detailed model of the air spring. One input of this model is the pressure in the rubber bellows of the air spring. To determine this pressure, an algorithm was implemented on the control unit of the air-spring system. To test the accuracy of the air-spring model and to determine the dimensions of certain components of the air spring, measurements were carried out on a suspension test rig, and a method was developed for computing the sought quantities from these measurements. The restrictions on the driving situations do not apply to the two remaining model variants (the dynamic model). These variants differ in that in one case the spring and damper constants are assumed to be known, while in the other they are estimated from the sensor signals.
For each model variant, a matching estimation method was chosen to compute estimates of the product m_g*Delta_h. In addition, a method was developed to estimate the mass m_g without first determining a value for the product m_g*Delta_h. The estimates were computed using data obtained from a simulation and from test drives. The comparison of the model variants shows that one variant of the dynamic model partly yields incorrect values for m_g*Delta_h, because its model equations form an unobservable system. The other variant of this model does not yield exact values for every load condition, mainly because its model equations assume a constant value for the spring stiffness; for vehicles with air springs, however, this value changes with the vehicle mass. The values of m_g*Delta_h and m_g can be determined most accurately with the stationary model. Furthermore, methods were developed that improve the accuracy of the values obtained by the estimation algorithm. In addition to the product m_g*Delta_h and the mass m_g, the distribution of the weight between the front and rear axles was also considered. The relations between this distribution and the product m_g*Delta_h, and between this distribution and the vehicle mass, were determined; in this way the error in the estimates of these quantities could be minimized. The relation between the product m_g*Delta_h and the vehicle mass was also determined, which allowed the estimates of these quantities to be refined further. From the values obtained in this way, the height of the centre of gravity of a Mercedes ML can be calculated to an accuracy of about 8 cm.
This accuracy is sufficient to adapt the electronic stability program to the current load of the vehicle and thus to gain agility for this vehicle.
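In quasi-stationary cornering, a roll model of the kind described above balances the roll moment of the lateral acceleration against the restoring moment of the suspension, phi ≈ (m_g*Delta_h / c_phi) * a_y, so m_g*Delta_h can be estimated by a least-squares fit of the roll angle against the lateral acceleration. A schematic sketch with synthetic data (the roll stiffness c_phi and all signal values are invented; the thesis replaces the single stiffness constant with a detailed air-spring force model):

```python
import numpy as np

c_phi = 3.0e5                 # invented roll stiffness [N*m/rad]
mg_dh_true = 2000.0 * 0.4     # sprung mass [kg] times CoG-to-roll-axis distance [m]

# Synthetic "measurements": lateral acceleration and quasi-static roll angle
rng = np.random.default_rng(0)
a_y = rng.uniform(-6.0, 6.0, 500)                          # m/s^2
phi = mg_dh_true * a_y / c_phi + rng.normal(0, 1e-4, 500)  # rad, plus sensor noise

# Least-squares fit of the linear model phi = (m_g*Delta_h / c_phi) * a_y
slope = np.linalg.lstsq(a_y[:, None], phi, rcond=None)[0][0]
mg_dh_est = slope * c_phi
print(f"estimated m_g*Delta_h = {mg_dh_est:.1f} kg*m (true value {mg_dh_true:.1f})")
```

The sketch neglects the gravity lever term and roll dynamics, which is precisely the regime in which the stationary model of the thesis is valid.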
The goal of this project is to develop a framework for a cell that takes its internal structure into account, using an agent-based approach. In this framework, a cell is simulated as many sub-particles interacting with each other. These sub-particles can, in principle, represent any internal structure of the cell (organelles, etc.). In the model discussed here, two types of sub-particles were used: membrane sub-particles and cytosolic elements. A kinetic and dynamic Delaunay triangulation was used to define the neighborhood relations between the sub-particles. However, it was soon noted that the relations defined by the Delaunay triangulation were not suitable for defining the interactions between membrane sub-particles. The cell membrane is a lipid bilayer and does not exhibit any long-range interactions between its sub-particles. This means that the membrane particles should not be able to interact at long range; instead, their interactions should be confined to the two-dimensional surface formed by the membrane. A method was therefore developed to select, from the original three-dimensional triangulation, the connections restricted to the two-dimensional surface formed by the cell membrane. The algorithm uses as its starting point the three-dimensional Delaunay triangulation involving both internal and membrane sub-particles. From this triangulation, only the subset of connections between membrane sub-particles is considered. Since the cell is full of internal particles, the collection of the membrane particles' connections already resembles the surface to be obtained, even though it still contains many connections that do not belong to the restricted triangulation on the surface. This "thick surface" was called a quasi-surface. The following step was to refine the quasi-surface, cutting out some of the connections so that the ones left form a proper surface triangulation of the membrane points. For that, the quasi-surface was separated into clusters.
Clusters are defined as areas on the quasi-surface that are not yet properly triangulated on a two-dimensional surface. Each of the clusters was then re-triangulated independently, using re-triangulation methods also developed during this work. The interactions between cytosolic elements were given by a Lennard-Jones potential, as were the interactions between cytosolic elements and membrane particles. Between membrane particles only, the interactions were given by an elastic interaction. For each particle, the equation of motion was written down, and the Verlet algorithm was chosen to solve the equations of motion. Since the cytosol can be approximated as a gel, it is reasonable to suppose that the sub-cellular particles move in an overdamped environment; therefore, an overdamped approximation was used for all interactions. Additionally, an adaptive algorithm was used to set the size of the time step in each iteration. After the method to re-triangulate the membrane points was implemented, the time needed to re-triangulate a single cluster was studied, followed by an analysis of how the time needed to re-triangulate each point in a cluster varies with the cluster size. The frequency of appearance of each cluster size was also examined, as this information is necessary to guarantee that the total time needed to re-triangulate a cell converges. Finally, the total time spent re-triangulating a surface was plotted, along with its scaling behaviour. Even though there is still a lot to be done, the work presented here is an important step towards the main goal of this project: to create an agent-based framework that not only allows the simulation of any sub-cellular structure of interest but also provides meaningful interaction relations for particles belonging to the cell membrane.
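The extraction of the membrane quasi-surface from the full three-dimensional triangulation can be sketched as follows: build the Delaunay triangulation of all sub-particles, then keep only the edges whose two endpoints are both membrane particles. (The point layout below is synthetic, and the subsequent cluster re-triangulation of the thesis is not shown.)

```python
import numpy as np
from itertools import combinations
from scipy.spatial import Delaunay

rng = np.random.default_rng(42)

# Synthetic cell: membrane points on a unit sphere, cytosolic points inside it
n_mem, n_cyt = 200, 400
mem = rng.normal(size=(n_mem, 3))
mem /= np.linalg.norm(mem, axis=1, keepdims=True)
cyt = rng.normal(size=(n_cyt, 3))
cyt *= 0.8 * rng.uniform(0, 1, (n_cyt, 1)) ** (1 / 3) / np.linalg.norm(cyt, axis=1, keepdims=True)

points = np.vstack([mem, cyt])
is_membrane = np.arange(len(points)) < n_mem

# 3D Delaunay triangulation of all sub-particles; simplices are tetrahedra
tri = Delaunay(points)
edges = set()
for simplex in tri.simplices:
    for i, j in combinations(simplex, 2):
        edges.add((min(i, j), max(i, j)))

# Quasi-surface: only membrane-membrane edges survive
quasi_surface = [(i, j) for i, j in edges if is_membrane[i] and is_membrane[j]]
print(f"{len(edges)} edges total, {len(quasi_surface)} on the membrane quasi-surface")
```

Because the interior is densely filled, most surviving edges hug the spherical shell, which is exactly the "thick surface" that the cluster re-triangulation step then thins out.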
Statistical physics of power flows on networks with a high share of fluctuating renewable generation
(2010)
Renewable energy sources will play an important role in the future generation of electrical energy. This is due to the limited reserves of fossil fuels and to the waste produced by conventional electricity generation. The most important sources of renewable energy, wind and solar irradiation, exhibit strong temporal fluctuations. This poses new problems for the security of supply. Furthermore, the power flows acquire a stochastic character, so that new methods are required to predict flows within an electrical grid. The main focus of this work is the description of power flows in an electrical transmission network with a high share of renewable generation of electrical energy. To define an appropriate model, it is important to understand the general set-up of a stable system with fluctuating generation. Therefore, generation time series of solar and wind power are compared to load time series for the whole of Europe, and the required balancing and storage capacities are analyzed. With these insights, a simple model is proposed to study the power flows. An approximation to the full power flow equations is used and evaluated with Monte-Carlo simulations. Furthermore, approximations to the distributions of power flows along the links are derived analytically. Finally, the results are compared to the power flows calculated from the generation and load data.
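A standard linear approximation of the full power-flow equations of the kind mentioned above is the DC power flow, B·theta = P, with link flows F_ij = b_ij(theta_i - theta_j). A minimal Monte-Carlo sketch on a made-up four-node network with fluctuating net injections (all numbers illustrative; the abstract does not state the exact approximation used):

```python
import numpy as np

# Made-up 4-node network: (i, j, susceptance b_ij)
lines = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (3, 0, 1.0), (0, 2, 0.5)]
n = 4

# Nodal susceptance (graph Laplacian) matrix
B = np.zeros((n, n))
for i, j, b in lines:
    B[i, i] += b; B[j, j] += b
    B[i, j] -= b; B[j, i] -= b

rng = np.random.default_rng(0)
flows = []
for _ in range(5000):
    P = rng.normal(0.0, 1.0, n)      # fluctuating injections (renewables minus load)
    P -= P.mean()                    # enforce overall power balance
    theta = np.zeros(n)              # solve B*theta = P with node 0 as slack
    theta[1:] = np.linalg.solve(B[1:, 1:], P[1:])
    flows.append([b * (theta[i] - theta[j]) for i, j, b in lines])

flows = np.array(flows)
print("std of flow on each link:", np.round(flows.std(axis=0), 2))
```

Collecting the sampled flows per link yields exactly the kind of flow distributions that the analytical approximations of the thesis are compared against.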
In nature, society and technology, many disordered systems exist that show emergent behaviour, where the interactions of numerous microscopic agents result in macroscopic, systemic properties that may not be present on the microscopic scale. Examples include phase transitions in magnetism and percolation (for example in disordered porous media), as well as biological and social systems. Technological systems that are explicitly designed to function without central control instances, with the Internet as their prime example, or virtual networks like the World Wide Web, which is defined by the hyperlinks from one web page to another, also exhibit emergent properties. The study of the common network characteristics found in previously seemingly unrelated fields of science, and the urge to explain their emergence, form a scientific field in its own right: the science of complex networks. In this field, methodologies from physics, leading to simplification and generalization by abstraction, help to shift the focus from implementation details on the microscopic level to the macroscopic, coarse-grained system level. By describing the macroscopic properties that emerge from microscopic interactions, statistical physics, in particular its stochastic and computational methods, has proven to be a valuable tool in the investigation of such systems. The mathematical framework for the description of networks is graph theory, in hindsight founded by Euler in 1736 and an active area of research since then. In recent years, applied graph theory has flourished through the advent of large-scale data sets made accessible by the use of computers. A paradigm for microscopic interactions among entities that locally optimize their behaviour to increase their own benefit is game theory, the mathematical framework of decision making. With early applications in economics, e.g. von Neumann (1944), game theory is an established field of mathematics.
However, game-theoretic behaviour is also found in natural systems, e.g. populations of the bacterium Escherichia coli, as described by Kerr (2002). In the present work, a combination of graph theory and game theory is used to model the interactions of selfish agents that form networks. Following brief introductions to graph theory and game theory, the present work approaches the interplay of local self-organizing rules with network properties and topology from three perspectives. To investigate the dynamics of topology reshaping, a coupling of the so-called iterated prisoners' dilemma (IPD) to the network structure is proposed and studied in Chapter 4. Depending on a free parameter in the payoff matrix, the reorganization dynamics result in various emergent network structures. The resulting topologies exhibit an increase in performance, measured by the variance of the closeness, by a factor of 1.2 to 1.9, depending on the chosen free parameter. Presented in Chapter 5, the second approach puts the focus on a static network structure and studies the cooperativity of the system, measured by the fixation probability. Heterogeneous strategies for distributing incentives for cooperation among the players are proposed. These strategies enhance the cooperative behaviour while requiring lower total investments. Putting the emphasis on communication networks in Chapters 6 and 7, the third approach investigates the use of routing metrics to increase the performance of data packet transport networks. Algorithms for the iterative determination of such metrics are demonstrated and investigated. The most successful of these algorithms, the hybrid metric, is able to increase the throughput capacity of a network by a factor of 7. During the investigation of the iterative weight assignments, a simple, static weight assignment, the so-called logKiKj metric, is found.
In contrast to the algorithmic metrics, it incurs vanishing computational costs, yet it is able to increase the performance by a factor of 5.
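The logKiKj metric mentioned above assigns each link the weight w_ij = log(k_i * k_j), where k_i is the degree of node i, which steers shortest-path routing around high-degree hubs. A sketch on an invented hub-and-ring topology, with a small hand-written Dijkstra:

```python
import heapq
from math import log
from collections import defaultdict

def dijkstra(adj, src, dst):
    """Return a minimum-cost path from src to dst; adj[u] = list of (v, weight)."""
    pq, seen = [(0.0, src, [src])], set()
    while pq:
        cost, u, path = heapq.heappop(pq)
        if u == dst:
            return path
        if u in seen:
            continue
        seen.add(u)
        for v, w in adj[u]:
            if v not in seen:
                heapq.heappush(pq, (cost + w, v, path + [v]))

# Invented topology: hub 0 with 12 spokes 1..12, plus a ring among the spokes
edges = [(0, i) for i in range(1, 13)] + [(i, i % 12 + 1) for i in range(1, 13)]
degree = defaultdict(int)
for u, v in edges:
    degree[u] += 1; degree[v] += 1

def build(weight):
    adj = defaultdict(list)
    for u, v in edges:
        w = weight(u, v)
        adj[u].append((v, w)); adj[v].append((u, w))
    return adj

hops  = build(lambda u, v: 1.0)                          # plain hop count
logkk = build(lambda u, v: log(degree[u] * degree[v]))   # logKiKj metric

print("hop-count path 1 -> 4:", dijkstra(hops, 1, 4))    # goes through the hub
print("logKiKj   path 1 -> 4:", dijkstra(logkk, 1, 4))   # routed around the hub
```

Routing around the hub spreads the load off the most congested node, which is the qualitative mechanism behind the reported throughput gains.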