Experiments on radiative electron capture (REC), the time-reversal of photoionization as it occurs in collisions of highly charged, relativistic heavy ions with light gas atoms, provide unique access to the study of the photon-matter interaction in the regime of extremely strong Coulomb fields. In the relativistic regime, REC radiation is characterized on the one hand by the appearance of higher electric and magnetic multipole orders and on the other hand by strong retardation effects. Consequently, the REC process has been studied in great detail in recent years, with experimental and theoretical research concentrating on the emission characteristics of the REC photons, e.g. on studies of angular distributions and line profiles. By now, the REC process can be regarded as a well-understood effect, even for the heaviest ions. However, one quantity essential for the description of the photon emission had so far eluded experiment: the polarization of the radiation. The linear polarization of REC radiation, as predicted for collisions between light atoms and the heaviest, highly charged ions, was the subject of the present work, in which it was possible for the first time to detect and study it in detail for the specific case of capture into the K shell of bare uranium ions. The necessary experimental investigations were carried out at the ESR storage ring of GSI Darmstadt for the collision system U92+ -> N2 and for projectile energies between 98 and 400 MeV/u. Of particular importance was the use of a segmented germanium detector developed specifically for the detection of linearly polarized radiation at energies above 100 keV. The linear polarization of the radiation was obtained by an analysis of the Compton scattering within the detector.
The data obtained by a precise analysis of the Compton-scattering distributions show a pronounced linear polarization of the REC radiation in the scattering plane, which in addition exhibits a strong dependence on the collision energy and the observation angle. A detailed comparison with non-relativistic and relativistic predictions moreover provided evidence for the occurrence of strong relativistic effects, which, however, act in a depolarizing manner. The experiment was performed at the internal target of the ESR storage ring; the photons were detected with several Ge(i) detectors viewing the ion-target interaction zone at observation angles between close to zero and 150 degrees. All photon detectors were operated in coincidence with a particle detector in order to capture the full characteristics of the REC process, i.e. the capture of a target electron into the bare uranium ions (U92+) accompanied by the emission of a photon. Decisive for the polarization measurement was the use of a germanium pixel detector, operated alternately at angles of 60 and 90 degrees. This detector features a 4x4 pixel matrix (pixel size: 7x7 mm), with the electronic information of each pixel (energy signals and fast timing signals) registered and recorded separately. This made it possible to detect and analyze events occurring in coincidence in two pixels, which is the essential prerequisite for detecting linear polarization at high photon energies, where the dependence of the differential cross section for Compton scattering on the linear polarization of the incident photons is exploited (see the Klein-Nishina formula, Eq. 2.7).
The Compton scattering is detected via the Compton recoil electron (deltaE) and the scattered Compton photon (hw'), which are registered separately but in coincidence in two different segments of the detector. It should be emphasized that, for germanium, absorption via the Compton effect already dominates over photoabsorption at photon energies above about 160 keV, so exploiting the Compton effect is in principle a very efficient technique. The data analysis benefited substantially from the fact that the germanium detector, compared to scintillation or gas counters, has a good energy resolution of about 1.8 keV at 122 keV. Thus, by forming the sum energy hw = hw' + deltaE for coincident events, the energy of the incident photon (hw) can be reconstructed and used as a strict condition that the event in the detector was indeed a Compton event. For the case of linear polarization, an essential statement of the Klein-Nishina formula is that the maximum intensity of the Compton-scattered photons is expected perpendicular to the polarization plane. Indeed, already the raw data recorded during the experiment show, for the REC radiation produced by capture into the K shell of the projectile, that this radiation is strongly polarized: an enhanced intensity was observed for Compton scattering perpendicular to the collision plane (defined, for the REC process, by the ion-beam axis and the momentum of the REC photon) (cf. Fig. 7.3). For a precise quantitative analysis of the measured data, all possible pixel combinations of the (4x4) detector geometry were evaluated; coincident events in adjacent segments, however, were excluded in order to eliminate the influence of electronic cross-talk present there.
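The polarization dependence exploited here can be sketched numerically. The following Python snippet (an illustrative sketch, not the analysis code of the thesis) evaluates the Klein-Nishina differential cross section for a fully linearly polarized incident photon; the azimuthal angle phi is measured from the polarization plane, so the intensity maximum perpendicular to that plane appears directly:

```python
import math

def klein_nishina_polarized(E_keV, theta, phi):
    """Differential Compton cross section (arbitrary units) for a fully
    linearly polarized incident photon of energy E_keV, scattered into
    polar angle theta; phi is measured from the polarization plane.
    Standard Klein-Nishina form; function name is illustrative."""
    m_e = 511.0                       # electron rest energy in keV
    k = E_keV / m_e
    ratio = 1.0 / (1.0 + k * (1.0 - math.cos(theta)))   # E'/E
    return 0.5 * ratio**2 * (ratio + 1.0 / ratio
                             - 2.0 * math.sin(theta)**2 * math.cos(phi)**2)
```

For scattering at theta = 90 degrees the cross section is largest at phi = 90 degrees, i.e. perpendicular to the polarization plane, which is exactly the asymmetry the pixel detector measures.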
In addition, the data analysis took into account a variety of effects that could influence the detection efficiencies for the Compton-scattered photons. Most prominent is the correction arising from the detector thickness of 1.5 cm and the pixel size of 7x7 mm2. It should be stressed that only relative efficiencies matter for the analysis, so the influence of systematic errors introduced by efficiency corrections could be strongly reduced (for a complete Compton-scattering distribution obtained in this way, see Figure 9.1, which shows the intensity distribution for Compton scattering). It should also be emphasized that the polarization was measured via complete Compton intensity distributions in the detector, which distinguishes the experiment discussed here substantially from conventional polarization experiments for hard X-rays and gamma radiation: in those experiments, the Compton scattering is usually detected only within the reaction plane and perpendicular to it. In general, the Compton-scattering distributions obtained in the present work for the K-REC process exhibit a pronounced maximum perpendicular to the reaction plane and thus confirm the finding already derived from the raw data that the polarization plane of the K-REC radiation lies in the reaction plane of the collision. In fact, this finding is confirmed for all energies and observation angles used in the experiment discussed here. It should also be noted that recording the complete Compton-scattering distribution made it possible to determine the orientation of the polarization plane with respect to the collision plane with high precision. For example, at a collision energy of 400 MeV/u and an angle of 90 degrees, the orientation of the Compton-scattering distribution with respect to the collision plane was determined to be phi = 90 degrees.
This finding could be decisive for the planning of future experiments aiming at the detection of polarized ion beams, since a deviation from the phi = 90 degree symmetry can only be explained by the presence of polarized particles. This effect, which has been studied in detail in recent theoretical treatments, thus represents a new approach to determining the degree of polarization of the projectiles and illustrates the strength of the technique applied here, which relies on a position-sensitive germanium pixel detector. The precise degree of polarization of the K-REC radiation was determined by a chi-squared fit of the Klein-Nishina formula to the experimental data. The resulting values show, for all beam energies and observation angles, a strong polarization of about 80%, with an experimental uncertainty in the 10% range, the latter being essentially due to the statistical accuracy. The data were furthermore compared in depth with theoretical predictions. The theory rests on a fully relativistic description of the REC process using exact wave functions for the continuum and for the 1s state in hydrogen-like uranium. Typically, both electric and magnetic multipole terms up to L = 20 had to be included in the calculations to reach convergence. The comparison shows excellent agreement between experiment and theory. Moreover, the comparison with the prediction of the non-relativistic dipole approximation, also discussed, highlights the importance of relativistic effects (above all the appearance of higher electric and magnetic multipoles), which are characteristic of the emission of REC radiation at high, relativistic energies and high Z. Evidently, these effects act in a strongly depolarizing manner.
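The fit of the azimuthal Compton modulation can be illustrated with a small least-squares sketch. All names are hypothetical; a real analysis would also fold in the polarimeter's analyzing power and the efficiency corrections described above:

```python
import numpy as np

def fit_modulation(phi, counts):
    """Least-squares fit of N(phi) = a + b*cos(2*(phi - phi0)) to azimuthal
    Compton-scattering counts. The modulation amplitude b/a is proportional
    to the degree of linear polarization (up to the analyzing power, not
    modelled here). Linearized as a + c*cos(2phi) + s*sin(2phi)."""
    A = np.column_stack([np.ones_like(phi), np.cos(2 * phi), np.sin(2 * phi)])
    coef, *_ = np.linalg.lstsq(A, counts, rcond=None)
    a, c, s = coef
    b = np.hypot(c, s)                 # modulation amplitude
    phi0 = 0.5 * np.arctan2(s, c)      # orientation of the modulation
    return a, b, phi0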
That an increase of the depolarizing effects indeed accompanies an increase of the beam energy is also documented by the data taken at the observation angle of 60 degrees as a function of the projectile energy. The results obtained in the present work for the polarization of REC radiation, as well as the novel experimental technique employed, suggest a series of further polarization experiments in the near future. In these, REC radiation and its polarization could play a key role as a diagnostic tool for detecting the degree of polarization of stored ion beams. As detector systems, two-dimensional germanium and silicon strip detectors will be used, as well as combinations of two-dimensional silicon and germanium detectors, so-called Compton telescopes. These Compton polarimeters, currently being developed for new experimental programs at the ESR storage ring, feature a substantially improved position resolution (e.g. 1x1 mm2) and therefore a substantially increased detection efficiency for Compton scattering (one to two orders of magnitude). This should make it possible to extend the energy range accessible to polarization experiments considerably, so that even the characteristic radiation of heavy ions (about 50 to 100 keV) becomes accessible to such experiments.
The Kaon Spectrometer (KaoS) at the heavy-ion synchrotron (SIS) of the Gesellschaft für Schwerionenforschung (GSI) in Darmstadt has been used to study the production and propagation of K+ and K- mesons in Au+Au collisions at a kinetic beam energy of 1.5 AGeV. This energy is close to the production threshold of K+ mesons in binary nucleon-nucleon collisions and far below that of K- mesons. The azimuthal angular distributions of the particles have been measured as a function of the collision centrality and the particle transverse momentum. The properties of strange mesons are expected to be modified by the in-medium meson-baryon potential. Theoretical calculations show that the superposition of the scalar and vector potentials leads to a weakly repulsive K+N and a strongly attractive K-N potential. In addition, kaons and antikaons interact differently with nuclear matter. The strangeness conservation law suppresses the absorption of K+ mesons, as they contain an antistrange quark. K- mesons, however, interact with nucleons via strangeness exchange (K- + N -> Y + pion, where Y = Lambda, Sigma). Moreover, the reverse process (pion + Y -> K- + N) is the dominant production mechanism of K- mesons at SIS energies. The azimuthal emission patterns of kaons are expected to be sensitive to the in-medium potentials. An enhanced out-of-plane emission of K+ mesons was observed in Au+Au reactions at 1.0 AGeV and 1.5 AGeV, and also in Ni+Ni at 1.93 AGeV. The out-of-plane emission of K+ mesons in Au+Au reactions at 1.0 AGeV was interpreted as a consequence of a repulsive K+N potential in the nuclear medium; recent transport calculations, however, show that the emission patterns obtained in Au+Au at 1.5 AGeV and Ni+Ni at 1.93 AGeV are additionally influenced by the re-scattering of kaons.
For K- mesons the calculations predict an almost isotropic emission pattern, because the attractive K-N potential counteracts the absorption of K- mesons in the spectator fragments. In Ni+Ni collisions at 1.93 AGeV the azimuthal distribution of K- mesons has indeed been found to be isotropic. In this case, however, the spectators are rather small and have large relative velocities. In addition, the delayed emission of antikaons due to strangeness-exchange reactions minimizes their interaction with the spectators. As a consequence, the sensitivity of the K- emission pattern to the in-medium K-N potential is reduced. In Au+Au collisions we found that the azimuthal emission pattern of K- mesons depends on the transverse momentum: antikaons registered with pt < 0.5 GeV/c are preferentially emitted in the reaction plane, while particles with pt > 0.5 GeV/c show a strong out-of-plane enhancement. The K- emission patterns can be explained in terms of two competing phenomena: one of them is indeed the influence of the attractive K-N potential, while the second originates from the strangeness-exchange process.
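Azimuthal emission patterns such as these are conventionally quantified by the Fourier coefficients v1 and v2 of the distribution dN/dphi ~ 1 + 2*v1*cos(phi) + 2*v2*cos(2*phi), with phi measured relative to the reaction plane; a negative v2 corresponds to the out-of-plane enhancement described above. A minimal sketch (assuming the reaction-plane angle has already been subtracted event by event; no correction for the reaction-plane resolution is attempted):

```python
import numpy as np

def flow_coefficients(phi):
    """Estimate v1 (directed flow) and v2 (elliptic flow) from a sample
    of azimuthal emission angles relative to the reaction plane.
    v2 < 0 signals preferential out-of-plane emission."""
    phi = np.asarray(phi, dtype=float)
    return np.mean(np.cos(phi)), np.mean(np.cos(2 * phi))
```

For a sample emitted mostly perpendicular to the reaction plane this yields a clearly negative v2, mirroring the enhanced out-of-plane emission of high-pt antikaons.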
In this thesis, we opened the door towards a novel estimation theory for homogeneous vectors and have taken several steps into this new and uncharted territory. The present state of the art for homogeneous estimation problems treats such vectors p ∈ P^n as unit vectors embedded in R^(n+1) and approximates the unit hypersphere by a tangent plane (which is an n-dimensional real space and thus has the same number of degrees of freedom as P^n). This approach allows one to use known and established methods from real space (e.g. the variational approach which leads to the FNS algorithm), but it only works well for small errors and has several drawbacks:
• The unit sphere is a two-sheeted covering space of the projective space. Embedding approaches cannot model this fact and can therefore degrade the estimation quality.
• Linearization breaks down if distributions are not highly concentrated (e.g. if data configurations approach degenerate situations).
• While estimation in tangent planes is possible with little error, the characterization of uncertainties with covariance matrices is much more problematic: covariance matrices are not suited for modelling axial uncertainties if distributions are not concentrated.
Therefore, we linked approaches from directional statistics and estimation theory together. (Homogeneous) TLS estimation could be identified as the central model for homogeneous estimation, and links to axial statistics were established. In the first chapters, a unified estimation theory for point data and axial data was developed. In contrast to present approaches, we identified axial data as a specific data model (and not just as directional data with a symmetric probability density function); this led to the development of novel notions such as axial mean vectors, axial variances and axial expectation values.
Like a tunnel which is constructed from both ends simultaneously, we also drilled from the parameter-estimation side towards directional/axial statistics in the second part. The presentation of parameter estimation given in this thesis deviates strongly from all known textbooks by presenting homogeneous estimation problems as a distinguished class of problems which calls for different estimation tools. Using the results from the first part, the TLS solution can be interpreted as the weighted anti-mean vector of an axial sample. This link allows us to use our results from axial statistics; for instance, the certainty of the anti-mode (i.e. of the TLS solution!) can be described with a weighted Bingham distribution (see (3.91)). While present approaches are only interested in an eigenvector of some matrix, we can now exploit the whole mean scatter matrix to describe the TLS solution and its certainty. Algorithms like FNS, HEIV or renormalization were presented in a common context and linked to each other. One central result is that all iterative homogeneous estimation algorithms essentially minimize a series of evolving Rayleigh coefficients, which corresponds to a series of (converging?) cost functions. Statistical optimization is only possible if we clearly identify every step as what it exactly is; for instance, the vague statement "solving Xp ≈ 0" means nothing but setting p̂ := arg min_p (p^T X p)/(p^T p). We identified the most complex scenario for which closed-form optimal solutions are possible (in terms of axial statistics: the type-I matrix-weighted model). The IETLS approach developed in this thesis then solves general type-II matrix-weighted problems by iteratively solving a series of type-I matrix-weighted problems. This approach also allows one to build converging schemes including robust and/or constrained estimation – in contrast to other approaches, which can have severe convergence problems even without such extensions if error levels are not low.
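The computation behind the Rayleigh-quotient view of homogeneous TLS is a smallest-eigenvector problem on the scatter matrix. A minimal numpy sketch (illustrative names; the weighted and constrained variants discussed above are not reproduced here):

```python
import numpy as np

def tls_homogeneous(carrier_rows):
    """Homogeneous TLS estimate: minimize the Rayleigh quotient
    p^T X p / (p^T p), where X is the scatter matrix of the carrier
    rows, by returning the unit eigenvector of the smallest eigenvalue
    (the 'anti-mean axis' in the axial-statistics reading)."""
    A = np.asarray(carrier_rows, dtype=float)
    X = A.T @ A                      # scatter matrix
    w, V = np.linalg.eigh(X)         # eigh: eigenvalues in ascending order
    return V[:, 0]                   # eigenvector of the smallest eigenvalue
```

As a usage example, fitting a homogeneous line a*x + b*y + c = 0 through 2D points amounts to calling this with carrier rows (x, y, 1); the returned unit vector is the line's homogeneous parameter vector, defined only up to sign, as befits an axial quantity.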
Chapter 6 then is another big step forward. We presented the theoretical background of homogeneous estimation by introducing novel concepts like singular-vector unbiasedness of random matrices and solved the problem of optimal estimation for correlated data. For instance, these results could be used for better estimation of local image orientation / optical flow (see section 7.2). At the end of this thesis, simulations and experiments for a few computer vision applications were presented; besides orientation estimation, especially the results for robust and constrained estimation of fundamental matrices are impressive. The novel algorithms are applicable to many other problems not presented here, for instance camera calibration, the factorization algorithm for multi-view structure from motion, or conic fitting. The fact that this work paved the way for a lot of further research is certainly a good sign.
I derive a general effective theory for hot and/or dense quark matter. After introducing general projection operators for hard and soft quark and gluon degrees of freedom, I explicitly compute the functional integral over the hard quark and gluon modes in the QCD partition function. Upon appropriate choices for the projection operators one recovers various well-known effective theories, such as the Hard Thermal Loop / Hard Dense Loop effective theories as well as the High Density Effective Theory of Hong and Schaefer. I then apply the effective theory to cold and dense quark matter and show how it can be utilized to simplify the weak-coupling solution of the color-superconducting gap equation. In general, one considers as relevant quark degrees of freedom those within a thin layer of width 2 Lambda_q around the Fermi surface, and as relevant gluon degrees of freedom those with 3-momenta less than Lambda_gl. It turns out to be necessary to choose Lambda_q << Lambda_gl, i.e., scattering of quarks along the Fermi surface is the dominant process. Moreover, this special choice of the two cutoff parameters Lambda_q and Lambda_gl facilitates the power counting of the numerous contributions to the gap equation. In addition, it is demonstrated that both the energy and the momentum dependence of the gap function have to be treated self-consistently in order to determine the imaginary part of the gap function. For quarks close to the Fermi surface the imaginary part is calculated explicitly and shown to be of sub-subleading order in the gap equation.
In the classical Dirac equation with strong potentials, called overcritical, a bound state reaches the negative continuum. In QED the presence of a static overcritical external electric field leads to a charged vacuum and indicates spontaneous particle creation when the overcritical field is switched on. The goal of this work is to clarify whether this effect exists in time-dependent physical processes, i.e. whether it can be uniquely defined and proved. Starting from a fundamental level of the theory, we check all mathematical and interpretational steps from the algebra of fields to the effect itself. In the first, theoretical part of this thesis we introduce the mathematical formulation of the classical and quantized Dirac theory together with their most important results. Using this language we rigorously define the notion of spontaneous particle creation in overcritical fields. First, we give a rigorous definition of resonances as poles of the resolvent or the Green's function and show how eigenvalues become resonances under Hamiltonian perturbations. In particular, we consider the perturbation of eigenvalues at the edge of the continuous spectrum, which is essential for overcritical potentials. Next, we gather various adiabatic theorems and discuss the well-posedness of scattering in the adiabatic limit. Then, we construct Fock-space representations of the field algebra, study their equivalence and give a unitary implementer of all Bogoliubov transformations induced by unitary transformations of the one-particle Hilbert space as well as by changes of the projector (or vacuum vector), as long as they lead to unitarily equivalent Fock representations. We implement in Fock space self-adjoint and unitary operators from the one-particle space, discussing the charge, energy, evolution and scattering operators. Then we introduce the notion of particles and several particle interpretations for time-dependent processes with a different Fock space at every instant of time.
We study how the charge, the energy and the number of particles change as a consequence of a change of representation or in implemented evolution or scattering processes, which is especially interesting in the presence of overcritical potentials. Using this language we rigorously define the notion of spontaneous particle creation. Then we look for physical processes which show the effect of vacuum decay and spontaneous particle creation exclusively due to the overcriticality of the potential. We consider several processes with static as well as suddenly switched on (and off) static overcritical potentials and conclude that they are unsatisfactory for the observation of spontaneous particle creation. Next, we consider the properties of general time-dependent scattering processes with a continuous switch-on (and switch-off) of an overcritical potential and show that they also fail to produce stable signatures of particle creation due to overcriticality. Further, we study and successfully define spontaneous particle creation in adiabatic processes, where the spontaneous antiparticle is created as a result of a resonance (wave-packet) decay in the negative continuum. Unfortunately, these processes lead to physically questionable pair production as the adiabatic limit is approached. Finally, we consider the extension of these ideas to non-adiabatic processes involving overcritical potentials and argue that they are the best candidate for exhibiting spontaneous pair creation in physical processes. If the spontaneous antiparticle is to be created in the state corresponding to the overcritical resonance, quick rather than slow processes should be considered, possibly with a long frozen overcritical period. In the second part of this thesis we concentrate on a class of spherically symmetric square-well potentials with a time-dependent depth. First, we solve the Dirac equation and analyze the structure and behaviour of the bound states and the appearance of overcriticality.
Then, by analytic continuation, we find and discuss the behaviour of resonances in overcritical potentials. Next, we derive and solve numerically (introducing a non-uniform continuum discretization for a consistent treatment of narrow peaks) a system of differential equations (coupled-channel equations) to calculate particle and antiparticle production spectra for various time-dependent processes, including sudden, quick and slow switch-on and switch-off of sub- and overcritical potentials. We discuss in detail how and under which conditions an overcritical resonance decays during the evolution, giving rise to the spontaneous production of an antiparticle. We compare the antiparticle production spectrum with the shape of the resonance in the overcritical potential. We study processes in which the overcritical potentials are switched on at different speeds and are possibly frozen in the overcritical phase. We show, in agreement with the conclusions of the theoretical part, that the peak (wave packet) in the negative continuum representing a dived bound state partially follows the moving resonance and partially decays at every stage of its evolution. This continuous decay is more intense in slow processes, while in quick processes the wave packet follows the resonance more closely. In the adiabatic limit, the whole decay already occurs at the edge of the continuum, resulting in the production of antiparticles with vanishing momentum. In contrast, in quick switch-on processes with a delay in the overcritical phase, the spectrum of the created antiparticles agrees best with the shape of the resonance. Finally, we address the question of how much information about the time-dependent potential can be reconstructed from the scattering data, represented by the particle production spectrum.
We propose a simple approximation method (a master equation) based on an exponential, decoherent decay of time-dependent resonances for predicting particle creation spectra and obtain good agreement with the results of the full numerical calculations. Additionally, we discuss various sources of error introduced by the numerical discretization, derive estimates for them and prove convergence of the numerical schemes.
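The exponential, decoherent decay ansatz behind such a master equation can be sketched in a few lines: the survival probability of the resonance is the exponential of minus the time integral of its instantaneous width Gamma(t). The helper below is purely illustrative (names and the trapezoidal quadrature are assumptions, not the thesis' scheme):

```python
import math
import numpy as np

def survival_probability(Gamma_of_t, t_grid):
    """Survival probability N(t_end)/N(0) of a time-dependent resonance
    under the exponential decay ansatz dN/dt = -Gamma(t) * N(t),
    with the width integrated on t_grid by the trapezoidal rule."""
    g = Gamma_of_t(t_grid)
    integral = float(np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(t_grid)))
    return math.exp(-integral)
```

The fraction 1 minus this survival probability is the weight that flows into the antiparticle spectrum, distributed around the instantaneous resonance energy in the master-equation picture.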
This thesis is devoted to the study of Micro Structured Electrode (MSE) sustained discharges. Innovative approaches in this work are i) the implementation of MSE arrays for high-pressure plasma generation and ii) the use of diode-laser atomic absorption spectroscopy for investigating sub-millimetric discharges. By means of MSE arrays the discharge gap is scaled down to the sub-millimetric range, and accordingly the working pressure could be increased up to atmospheric pressure. It should be underlined that, besides the ease of use (since expensive vacuum equipment is not required), high-pressure discharges also offer a high density of active species. An MSE consists of holes regularly distributed in a composite sheet made of two metal layers separated by an insulator; the electrode and insulator thicknesses and the diameter of the holes are in the 100-micrometer range. Based on these microstructures, stable non-filamentary DC discharges are generated in noble gases and gas mixtures at pressures up to 1000 mbar. The MSE-sustained discharge can be considered a normal glow discharge in which the excitation and ionization efficiency is increased by the specific electrode configuration (hollow-cathode geometry). Large-area high-pressure plasmas can be achieved by parallel operation of a large number of microdischarges. Parallel operation of up to 200 microdischarges without individual ballast was demonstrated for pressures up to 300 mbar. Furthermore, arrays of resistively decoupled microdischarges were operated up to atmospheric pressure. Spectral investigations have revealed the presence of highly energetic electrons (20 eV), a large density of atoms in metastable states (10^13 cm^-3) and a high electron density (10^15 cm^-3). Although the plasma confined inside the hole of the MSE may reach gas temperatures of up to 1000 K, the ambient gas temperature immediately above the microstructure only slightly exceeds room temperature.
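Diode-laser absorption spectroscopy yields the metastable density via the Beer-Lambert law, I = I0 * exp(-sigma * n * L). A minimal sketch (assuming a single effective absorption cross section rather than an integration over the absorption line profile, as a real evaluation would require):

```python
import math

def absorber_density(I0, I, sigma_cm2, path_cm):
    """Line-of-sight averaged absorber density n (cm^-3) from the
    transmitted laser intensity via Beer-Lambert:
    I = I0 * exp(-sigma * n * L)  =>  n = ln(I0/I) / (sigma * L).
    Illustrative helper; sigma is an effective cross section."""
    return math.log(I0 / I) / (sigma_cm2 * path_cm)
```

For a sub-millimetric discharge the short path length L makes the measured absorbance small, which is why the good sensitivity of diode-laser spectroscopy matters here.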
The reactivity of the MSE-sustained discharge was demonstrated with respect to waste-gas decomposition and surface treatment. The MSE arrays provide a non-equilibrium high-pressure plasma, which is very promising for surface processing, plasma chemistry and the generation of UV radiation.
This thesis presents a model for the dynamical description of deconfined quark matter created in ultra-relativistic heavy-ion collisions, treating quarks and antiquarks as classical point particles subject to a colour-dependent, Cornell-type potential interaction. The model provides a dynamical handle on hadronization via the recombination of quarks and antiquarks into colour-neutral clusters. Gluons are not included explicitly in the model, but are described in an effective manner by means of the potential interaction. The model includes four different quark flavours (up, down, strange and charm) and uses current masses for the quarks. The dynamical evolution of a system of colour charges subject to the Hamiltonian equations of motion of the model yields the formation of colour-neutral clusters of quarks and antiquarks, which are subject only to a small remaining interaction, the strong interquark potential notwithstanding. These clusters can be mapped onto hadrons and hadronic resonances. Thus, the model allows a dynamical description of quark degrees of freedom in heavy-ion collisions, including a recombination scheme for hadronization. The thermal properties of the model turn out to be very satisfying: the model shows a transition from a confining phase to a deconfined phase with rising temperature, going hand in hand with a softest point in the equation of state and a rise of energy density and pressure towards the Stefan-Boltzmann limit of a gas of quarks and antiquarks. Moreover, the potential interaction is screened in the deconfined phase. For the dynamical description of ultra-relativistic heavy-ion collisions, the qMD model is coupled to UrQMD as a generator for its initial conditions. In this way, a fully dynamical description of the expansion and hadronization of the fireball created in such collisions can be achieved.
Non-equilibrium aspects of the expansion dynamics and of hadronization by recombination of quarks and antiquarks are discussed in detail, and a comparison with experimental data from collisions at the CERN SPS is presented. The big advantage of the qMD model is the possibility to study cluster formation, including exotic clusters, and fluctuations in a dynamical manner. As an example, event-by-event fluctuations of the electric charge are studied. Such fluctuations have been proposed as a clear criterion to distinguish a deconfined system from a hadron gas. However, experimental data show hadron-gas fluctuation measures even at RHIC, where deconfinement is taken for granted. We will see how the dynamics of quark recombination washes out the quark-gluon plasma signal in the fluctuation criterion. Moreover, we will briefly discuss the problem of entropy at recombination. In a second application, the formation of exotic hadronic clusters, larger than the usual mesons and baryons, is studied. Such clusters could provide new measures of the thermalization and homogenization of a deconfined gas of colour charges. Moreover, number estimates for exotic clusters from recombination are considerably lower than the corresponding predictions from thermal models, providing a clear difference between statistical hadronization and hadronization via quark recombination. A detailed analysis is provided for pentaquark candidates such as the Theta+. It turns out that the distribution of exotic states over strangeness, isospin, and spin could provide a sensitive measure of thermalization and decorrelation in the deconfined quark phase, if it could be measured.
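The event-by-event charge fluctuation observable referred to here is commonly written as D = 4<dQ^2>/<N_ch>; values near 3-4 are expected for a hadron gas and near 1 for an uncorrelated quark-gluon plasma. A minimal event-by-event sketch (names are illustrative; corrections for global charge conservation and acceptance are omitted):

```python
import numpy as np

def charge_fluctuation_D(Q_events, Nch_events):
    """Event-by-event charge fluctuation measure D = 4 * Var(Q) / <N_ch>,
    computed from the net charge Q and the charged multiplicity N_ch
    observed in each event."""
    Q = np.asarray(Q_events, dtype=float)
    Nch = np.asarray(Nch_events, dtype=float)
    return 4.0 * Q.var() / Nch.mean()
```

In the qMD picture, recombination correlates quarks of opposite charge into hadrons, driving this measure from the plasma value back towards the hadron-gas value, which is how the signal gets washed out.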
This work is dedicated to the investigation of nuclear matter at non-zero temperatures within an effective hadronic model based on the Walecka model. It includes fermions as well as a vector omega meson and a scalar sigma meson, where for the latter a quartic self-interaction has been considered. The coupling constants have been adjusted to the saturation properties of infinite nuclear matter. A set of self-consistent Schwinger-Dyson equations has been set up for all included particles within the Cornwall-Jackiw-Tomboulis formalism. This has been extended to non-zero temperatures via the imaginary-time formalism. Besides the tree level, two different stages of approximation have been considered: the Hartree approximation, which takes into account the double-bubble diagram for the scalar meson, and an improved approximation in which, in addition, two-particle-irreducible sunset diagrams for all fields were included. In the Hartree approximation the Schwinger-Dyson equations can be solved by quasi-particle ansätze, while in the improved approximation spectral functions with non-zero widths have to be introduced. The Schwinger-Dyson equations are solved for the fully dressed propagators. Comparing the two levels of approximation shows the influence of finite widths on the temperature dependence of the particle properties. The consideration of finite widths indeed has a significant influence on the transition from a phase of heavy nucleons to a phase of light nucleons observed in the Walecka model. The temperature dependence is weakened when finite widths are taken into account.
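The quasi-particle (Hartree) level described above can be summarized by the standard mean-field relations of the Walecka model; the sketch below uses generic couplings g_σ and self-interaction parameters b, c, as an illustration rather than the exact parametrisation of the thesis.

```latex
% Mean-field sketch (generic couplings, illustrative only):
% effective nucleon mass and quasi-particle energy
\[
  M^{*} = M - g_\sigma \bar\sigma , \qquad
  E^{*}(k) = \sqrt{k^{2} + M^{*2}} ,
\]
% gap equation for the scalar mean field at temperature T,
% with spin-isospin degeneracy gamma and thermal occupation
% numbers n_F, \bar n_F for nucleons and antinucleons
\[
  m_\sigma^{2}\,\bar\sigma + b\,\bar\sigma^{2} + c\,\bar\sigma^{3}
    = g_\sigma\,\rho_s , \qquad
  \rho_s = \frac{\gamma}{2\pi^{2}} \int_{0}^{\infty} \! dk \, k^{2}\,
           \frac{M^{*}}{E^{*}(k)}\,\bigl[\, n_F(k) + \bar n_F(k) \,\bigr] .
\]
```

The improved approximation replaces the sharp quasi-particle pole implicit in these relations by spectral functions of non-zero width, which is what softens the transition between the heavy- and light-nucleon phases.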
Jet physics in ALICE
(2005)
This work assesses the performance of the ALICE detector for the measurement of high-energy jets at mid-pseudo-rapidity in ultra-relativistic nucleus-nucleus collisions at the LHC, and the potential of such jets for the characterization of the partonic matter created in these collisions. In our approach, high-energy jets with E_{T} > 50 GeV are reconstructed with a cone jet finder, as is typically done for jet measurements in hadronic collisions. Within the ALICE framework, we study the detector's capabilities for measuring high-energy jets and quantify the obtainable rates and the quality of the reconstruction, both in proton-proton and in lead-lead collisions under LHC conditions. In particular, we address whether a modification of the jet fragmentation in the charged-particle sector can be detected within the high particle-multiplicity environment of central lead-lead collisions. We treat these topics comparatively in view of an EMCAL proposed to complement the central ALICE tracking detectors. The main activities of this thesis are the following: a) determination of the potential for exclusive jet measurements in ALICE; b) determination of jet rates that can be acquired with the ALICE setup; c) development of a parton-energy-loss model; d) simulation and study of the energy-loss effect on jet properties.
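A seeded iterative cone jet finder of the kind referred to above can be sketched in a few lines. The cone radius, seed cut, and threshold below are illustrative defaults, not the values of the ALICE analysis.

```python
import math

# Sketch of a seeded iterative cone jet finder: start a cone at the
# hardest remaining particle, iterate its axis to the ET-weighted
# centroid, and accept the jet if its summed ET exceeds a threshold.
# Parameter values are illustrative assumptions.

def _dphi(a, b):
    """Smallest signed azimuthal difference a - b (handles wrap-around)."""
    return math.atan2(math.sin(a - b), math.cos(a - b))

def cone_jets(particles, r_cone=0.7, seed_et=5.0, et_min=50.0):
    """particles: list of (ET, eta, phi) tuples; returns jets as
    (ET, eta, phi) with summed ET above et_min."""
    remaining = sorted(particles, reverse=True)   # hardest particle first
    jets = []
    while remaining and remaining[0][0] > seed_et:
        seed = remaining[0]
        eta0, phi0 = seed[1], seed[2]
        cone = [seed]
        for _ in range(10):                       # iterate to a stable axis
            cone = [p for p in remaining
                    if math.hypot(p[1] - eta0, _dphi(p[2], phi0)) < r_cone]
            if not cone:
                cone = [seed]
                break
            et = sum(p[0] for p in cone)
            eta0 = sum(p[0] * p[1] for p in cone) / et
            phi0 = math.atan2(sum(p[0] * math.sin(p[2]) for p in cone),
                              sum(p[0] * math.cos(p[2]) for p in cone))
        et = sum(p[0] for p in cone)
        if et > et_min:
            jets.append((et, eta0, phi0))
        # remove clustered particles (and the seed) before the next pass
        remaining = [p for p in remaining if p not in cone and p is not seed]
    return jets
```

In the high-multiplicity environment of central lead-lead collisions, such a finder must additionally subtract the large underlying-event background inside the cone, which is the main challenge addressed in the thesis.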
In the present work, the Heidelberg electron beam ion trap (EBIT) at the Max-Planck-Institut für Kernphysik (MPIK) has been used to produce and trap highly charged argon ions and to study their magnetic dipole (M1) forbidden transitions. These transitions are of relativistic origin and hence provide unique possibilities for precise studies of relativistic effects in many-electron systems. In this way, the transition energies of the 1s²2s²2p ²P3/2 – ²P1/2 transition in Ar¹³⁺ and of the 1s²2s2p ³P1 – ³P2 transition in Ar¹⁴⁺ were compared for the isotopes ³⁶Ar and ⁴⁰Ar. The observed isotopic effect has confirmed the relativistic nuclear recoil corrections due to the finite nuclear mass in a recent calculation by Tupitsyn [TSC03], in which major inconsistencies of earlier theoretical methods were corrected for the first time. The finite-mass, or recoil, effect, composed of the normal mass shift (NMS) and the specific mass shift (SMS), was corrected for relativistic contributions, the RNMS and RSMS. The present experimental results show that the recoil effects at the Breit level are indeed very important, as are the effects of the correlated relativistic dynamics in a many-electron ion.
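The decomposition of the recoil effect into the normal and specific mass shifts, which the relativistic corrections (RNMS, RSMS) extend, can be written in the non-relativistic limit as follows (M is the nuclear mass, p_i the electron momenta):

```latex
% Non-relativistic nuclear recoil operator: the one-electron part gives
% the normal mass shift (NMS), the two-electron correlation part gives
% the specific mass shift (SMS). Relativistic corrections (RNMS, RSMS)
% modify both terms at the Breit level.
\[
  H_M = \frac{1}{2M}\Bigl(\sum_i \mathbf{p}_i\Bigr)^{2}
      = \underbrace{\frac{1}{2M}\sum_i \mathbf{p}_i^{\,2}}_{\text{NMS}}
      \;+\;
      \underbrace{\frac{1}{M}\sum_{i<j} \mathbf{p}_i \cdot \mathbf{p}_j}_{\text{SMS}} .
\]
```

Because the SMS term correlates the momenta of different electrons, the measured isotope shift is directly sensitive to the correlated relativistic dynamics emphasized in the abstract.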