In this work we study the non-equilibrium dynamics of a quark-gluon plasma, as created in heavy-ion collisions. We investigate how large a role plasma instabilities can play in the isotropization and equilibration of a quark-gluon plasma. In particular, we determine, among other things, how much collisions between the particles can reduce the growth rate of unstable modes. This is done both in a model calculation using the hard-loop approximation and in a real-time lattice simulation combining classical Yang-Mills fields with inter-particle collisions. The new, extended version of the simulation is also used to investigate jet transport in isotropic media, leading to a cutoff-independent result for the transport coefficient $\hat{q}$. The precise determination of such transport coefficients is essential, since they can provide important information about the medium created in heavy-ion collisions. In anisotropic media, the effect of instabilities on jet transport is studied, leading to a possible explanation for the experimental observation that high-energy jets traversing the plasma perpendicular to the beam axis experience much stronger broadening in rapidity than in azimuth. The investigation of collective modes in the hard-loop limit is extended to fermionic modes, which are shown to be all stable. Finally, we study the possibility of using high-energy photon production as a tool to experimentally determine the anisotropy of the created system. Knowledge of the degree of local momentum-space anisotropy reached in a heavy-ion collision is essential for the study of instabilities and their role in isotropization and thermalization, because their growth rate depends strongly on the anisotropy.
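For orientation, the jet transport coefficient is conventionally defined as the mean squared transverse momentum a hard parton picks up per unit path length in the medium, $\hat{q} = \langle p_\perp^2 \rangle / L$; this standard definition is quoted here for context and is not specific to this thesis.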
ALICE (A Large Ion Collider Experiment) is the dedicated heavy-ion experiment at the Large Hadron Collider (LHC) at CERN. It is optimised to reconstruct and identify the particles created in a lead-lead collision with a centre-of-mass energy of 5.5 TeV. The main tracking detector is a large-volume time-projection chamber (TPC). With an active volume of about 88 m^3 and a total readout area of 32.5 m^2, it is the most challenging TPC ever built. A central electrode divides the 5 m long detector into two drift regions. Each readout side is subdivided into 18 inner and 18 outer multi-wire proportional read-out chambers. The readout area is subdivided into 557568 pads, where each pad is read out by an individual electronics chain. A complex calibration is needed in order to reach the design position resolution of the reconstructed particle tracks of about 200 µm. One part of the calibration lies in understanding the electronics response. The work at hand presents results on the pedestal and noise behaviour of the front-end electronics (FEE), measurements of the pulse-shaping properties of the FEE using results obtained with a calibration pulser, and measurements performed with the laser-calibration system. The data concerned were taken during two phases of the TPC commissioning. First measurements were performed in the clean room where the TPC was built. After the TPC was moved underground and installed in the experiment, a second round of commissioning took place. Noise measurements in the clean room revealed a very large fraction of pads with noise values larger than the design specifications. The unexpectedly high noise values could be explained by the 'ground bounce' effect. Two modifications helped to reduce this effect: a desynchronisation of the start of the readout of groups of channels and a modification of the grounding scheme of the FEE. Further noise measurements were carried out after the TPC had been moved to the experimental area underground. Here an even larger fraction of channels showed too large noise values. This could be traced back to a common-mode current injected by the electronics power supplies. To study the shaping properties of the FEE, a calibration pulser was used. To generate signals in the FEE, a pulse is injected into the cathode wires of the read-out chambers. Due to manufacturing tolerances, slight channel-by-channel variations of the shaping properties are expected. This affects the determination of the arrival time as well as the measured integral signal of the induced charge, and has to be corrected for. The measured arrival-time variations follow a Gaussian distribution with a width (sigma) of 6.2 ns. This corresponds to an error of the cluster position of about 170 µm. The charge variations are on the level of 2.8%. In order to reach the intrinsic resolution of the measurement of the specific energy loss of the particles (6%), those variations have to be taken into account. The photons of the laser-calibration system are energetic enough to release photoelectrons from metallic surfaces. Most interesting for the detector calibration are photoelectrons from the central electrode. The laser light is intense enough to produce a signal in all readout channels of the TPC. Since the central electrode is a smooth surface, differences in the arrival time between sectors reveal mechanical displacements of the readout sectors and can be used to correct for this effect. In addition, the measurements can be used to determine the electron drift velocity in the TPC gas.
The drift velocity measurements have shown a vertical as well as a radial gradient. The first can be explained by the temperature gradient which naturally builds up in the 5 m high detector. The second gradient is most probably caused by a relative conical deformation of the readout plane and the central electrode.
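As a rough consistency check of the numbers above (a back-of-the-envelope sketch; the drift velocity of about 2.7 cm/µs is a typical value for the ALICE TPC gas and is an assumption here, not a number quoted in this text):

```python
# Back-of-the-envelope check: arrival-time jitter -> position error along drift.
# ASSUMPTION: drift velocity ~2.7 cm/us, a typical ALICE TPC value;
# the abstract itself does not quote this number.
sigma_t = 6.2e-9          # arrival-time spread (s), from the pulser study
v_drift = 2.7e4           # assumed electron drift velocity (m/s) = 2.7 cm/us
sigma_z = sigma_t * v_drift
print(f"position error along drift: {sigma_z * 1e6:.0f} um")  # ~170 um
```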
A new era in experimental nuclear physics has begun with the start-up of the Large Hadron Collider at CERN and its dedicated heavy-ion detector system ALICE. Measuring the highest energy density ever produced in nucleus-nucleus collisions, the detector has been designed to study the properties of the created hot and dense medium, assumed to be a Quark-Gluon Plasma.
Comprising 18 high-granularity sub-detectors, ALICE delivers data from a few million electronic channels for proton-proton and heavy-ion collisions.
The produced data volume can reach up to 26 GByte/s for central Pb–Pb collisions at the design luminosity of L = 10^27 cm^-2 s^-1, challenging not only the data storage but also the physics analysis. A High-Level Trigger (HLT) has been built and commissioned to reduce that amount of data to a storable value prior to archiving, by means of data filtering and compression, without loss of physics information. Implemented as a large high-performance compute cluster, the HLT is able to perform a full reconstruction of all events at the time of data-taking, which allows triggering based on the information of a complete event. Rare physics probes with high transverse momentum can be identified and selected to enhance the overall physics reach of the experiment.
The commissioning of the HLT is at the centre of this thesis. Being deeply embedded in the ALICE data path and therefore interfacing with all other ALICE subsystems, this commissioning posed not only a major challenge but also required a massive coordination effort, which culminated in the first proton-proton collisions reconstructed by the HLT. Furthermore, this thesis is completed by the study and implementation of on-line high-transverse-momentum triggers.
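As a purely illustrative example of what an event-level decision of this kind can look like (the track structure and the 5 GeV/c threshold are hypothetical, not taken from the thesis):

```python
from dataclasses import dataclass

@dataclass
class Track:
    pt: float   # transverse momentum in GeV/c

def high_pt_trigger(tracks, pt_threshold=5.0, min_tracks=1):
    """Accept the event if enough reconstructed tracks pass the pT cut."""
    return sum(1 for t in tracks if t.pt > pt_threshold) >= min_tracks

# Example: an event with one 7 GeV/c track fires the trigger.
print(high_pt_trigger([Track(0.4), Track(7.0)]))  # True
```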
Different approaches are possible when it comes to modeling the brain. Given its biological nature, models can be constructed out of the chemical and biological building blocks known to be at play in the brain, formulating a given mechanism in terms of the basic interactions underlying it. On the other hand, the functions of the brain can be described in a more general or macroscopic way, in terms of desirable goals. These goals may include reducing metabolic costs, being stable or robust, or being efficient in computational terms. Synaptic plasticity, that is, the study of how the connections between neurons evolve in time, is no exception to this. In the following work we formulate (and study the properties of) synaptic plasticity models, employing two complementary approaches: a top-down approach, deriving a learning rule from a guiding principle for rate-encoding neurons, and a bottom-up approach, where a simple yet biophysical rule for time-dependent plasticity is constructed.
We begin this thesis with a general overview, in Chapter 1, of the properties of neurons and their connections, clarifying notation and the jargon of the field. These will be our building blocks and will also determine the constraints we need to respect when formulating our models. We will discuss the present challenges of computational neuroscience, as well as the role of physicists in this line of research.
In Chapters 2 and 3, we develop and study a local online Hebbian self-limiting synaptic plasticity rule, employing the aforementioned top-down approach. First, in Chapter 2 we formulate the stationarity principle of statistical learning in terms of the Fisher information of the output probability distribution with respect to the synaptic weights. To ensure that the learning rules are formulated in terms of information locally available to a synapse, we employ the local-synapse extension of the one-dimensional Fisher information. Once the objective function has been defined, we derive an online synaptic plasticity rule via stochastic gradient descent.
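Structurally, such a rule is an online stochastic-gradient update of the synaptic weights. The sketch below shows only this generic structure; the placeholder objective gradient is invented for illustration and is not the Fisher-information objective derived in the thesis:

```python
import numpy as np

# Illustrative skeleton of an online gradient-descent plasticity rule.
# grad_F is NOT the Fisher-information objective of the thesis; it is a
# stand-in that only demonstrates the structure of the update.
rng = np.random.default_rng(0)
w = rng.normal(size=10)             # synaptic weights
eps = 1e-3                          # learning rate

def grad_F(w, x):
    """Gradient of a hypothetical local objective for one input pattern x."""
    y = np.tanh(w @ x)              # neural output (rate model)
    return (y**2 - 1.0) * y * x     # illustrative placeholder gradient

for _ in range(1000):               # online learning: one pattern at a time
    x = rng.normal(size=10)         # input drawn from some distribution
    w -= eps * grad_F(w, x)         # stochastic gradient descent step
```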
In order to test the computational capabilities of a neuron evolving according to this rule (combined with a preexisting intrinsic plasticity rule), we perform a series of numerical experiments, training the neuron with different input distributions.
We observe that, for input distributions closely resembling a multivariate normal distribution, the neuron robustly selects the first principal component of the distribution, showing otherwise a strong preference for directions of large negative excess kurtosis.
In Chapter 3 we study the robustness of the learning rule derived in Chapter 2 with respect to variations in the neural model’s transfer function. In particular, we find an equivalent cubic form of the rule which, given its functional simplicity, permits the analytic computation of the attractors (stationary solutions) of the learning procedure as a function of the statistical moments of the input distribution. In this way, we manage to explain the numerical findings of Chapter 2 analytically and formulate a prediction: if the neuron is selective to non-Gaussian input directions, it should be suitable for applications to independent component analysis. We close this section by showing how, indeed, a neuron operating under these rules can learn the independent components in the non-linear bars problem.
A simple biophysical model for spike-timing-dependent plasticity (STDP) is developed in Chapter 4. The model is formulated in terms of two decaying traces present in the synapse, namely the fraction of activated NMDA receptors and the calcium concentration, which serve as clocks, measuring the timing of pre- and postsynaptic spikes. While constructed in terms of the key biological elements thought to be involved in the process, we have kept the functional dependencies of the variables as simple as possible to allow for analytic tractability. Despite its simplicity, the model is able to reproduce several experimental results, including the typical pairwise STDP curve and triplet results, in both hippocampal culture and layer 2/3 cortical neurons. Thanks to the model’s functional simplicity, we are able to compute these results analytically, establishing a direct and transparent connection between the model’s internal parameters and the qualitative features of the results.
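The following toy sketch illustrates only the generic structure of such a two-trace model; the decay constants, amplitudes, and update rule are invented for illustration and are not those derived in the thesis (whose traces, the NMDA-receptor activation and the calcium concentration, have their own dynamics):

```python
import numpy as np

# Generic two-trace STDP sketch: one trace driven by presynaptic spikes
# (stand-in for the activated NMDA-receptor fraction), one by postsynaptic
# spikes (stand-in for the calcium concentration). All numbers illustrative.
tau_pre, tau_post = 20e-3, 30e-3    # decay times of the two traces (s)
dt = 1e-4                           # integration step (s)

def run(pre_spikes, post_spikes, t_end, a_plus=0.01, a_minus=0.012):
    """Integrate both traces and accumulate the weight change dw."""
    x = y = dw = 0.0                # x: pre trace, y: post trace
    for step in range(int(t_end / dt)):
        t = step * dt
        x -= dt / tau_pre * x       # exponential decay of both traces
        y -= dt / tau_post * y
        if any(abs(t - s) < dt / 2 for s in pre_spikes):
            x += 1.0
            dw -= a_minus * y       # pre after post -> depression
        if any(abs(t - s) < dt / 2 for s in post_spikes):
            y += 1.0
            dw += a_plus * x        # post after pre -> potentiation
    return dw

# Pairwise protocol: post follows pre by 10 ms -> net potentiation.
print(run(pre_spikes=[0.05], post_spikes=[0.06], t_end=0.2) > 0)  # True
```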
Finally, in order to make a connection to synaptic plasticity for rate-encoding neural models, we train the synapse with uncorrelated Poisson pre- and postsynaptic spike trains and compute the expected synaptic weight change as a function of the frequencies of these spike trains. Interestingly, a Hebbian (in the rate-encoding sense of the word) BCM-like behavior is recovered in this setup for hippocampal neurons, while dominant depression seems unavoidable for parameter configurations reproducing the experimentally observed triplet nonlinearities in layer 2/3 cortical neurons. Potentiation can, however, be recovered in these neurons when correlations between pre- and postsynaptic spikes are present. We end this chapter by discussing the relation to existing experimental results, leaving open questions and predictions for future experiments.
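Equally schematic, this rate experiment can be pictured as driving the toy model above with uncorrelated Poisson trains and averaging the weight change over trials; again, all numbers are illustrative:

```python
import numpy as np

# Rate experiment for the two-trace sketch above (reuses run() from there).
rng = np.random.default_rng(1)

def poisson_train(rate_hz, t_end):
    """Homogeneous Poisson spike times on [0, t_end)."""
    n = rng.poisson(rate_hz * t_end)
    return np.sort(rng.uniform(0.0, t_end, size=n))

def mean_dw(rate_pre, rate_post, t_end=1.0, trials=10):
    """Expected weight change for given pre/post firing rates."""
    return np.mean([run(poisson_train(rate_pre, t_end),
                        poisson_train(rate_post, t_end), t_end)
                    for _ in range(trials)])

# Expected weight change as a function of the postsynaptic rate.
for f_post in (5, 20, 50):
    print(f_post, mean_dw(rate_pre=10, rate_post=f_post))
```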
A set of summary cards of the models employed, together with listings of the relevant variables and parameters, is presented at the end of the thesis for easier access and permanent reference.
Numerous physical processes, such as bremsstrahlung, synchrotron radiation, or radiative recombination, cause the emission of highly linearly polarized X-rays. Nevertheless, technically usable, highly polarized X-ray radiation is currently provided almost exclusively by a few highly specialized synchrotron light sources or free-electron lasers. In the present work, radiative capture into the K shell of bare xenon was used to realize, for the first time, a source of tunable, monoenergetic, and highly polarized X-rays (97%) in a storage-ring environment. To detect the polarization of the radiation, a novel position-, time-, and energy-resolving Si(Li) strip detector was employed for the first time as an X-ray polarimeter, circumventing the limitations of traditional Compton polarimeters. The measured high degree of linear polarization, which agrees with theoretical predictions, is quite remarkable, since the highly polarized X-rays originated from a collision between an unpolarized ion beam and an unpolarized gas jet. This means that radiative electron capture is an ideal tool for producing highly polarized X-rays of freely selectable energy in a storage-ring environment. The development of the new 2D detector technology also opens up possibilities for the experimental investigation of the details of atomic-physics processes. Within this work, by combining the detector with the accelerator facility at GSI, the linear polarization of the radiation from radiative electron capture into the energetically partially resolved L subshells of bare uranium was determined experimentally for the first time. In addition, new and more precise values for the polarization of the capture radiation into the K shells of bare and hydrogen-like uranium were measured. The theoretical predictions showed a strong sensitivity of linear-polarization measurements of the radiation emitted in radiative electron capture to the influence of higher orders of the multipole expansion, which must be taken into account particularly in heavy-ion-atom collisions. While these effects are comparatively small in measurements of the angular distributions of radiative electron capture, especially at smaller angles with respect to the ion-beam axis in the laboratory frame, a very pronounced depolarization effect is observed here. Herein lies the essential difference between the measurements, presented in this work, of the linear polarization of the radiative-capture radiation in xenon and in uranium. The occurrence of the strong depolarization illustrates the strong dependence of the polarization characteristics of the REC process on the nuclear charge of the projectile. Finally, the step toward the measurement technique used for the first time in this work, a highly resolving strip detector, should be emphasized. In contrast to earlier polarization measurements with coarsely segmented pixel detectors, practically no additional assumptions or simulations were needed to interpret the measured angular distributions. With this system, a first estimate of the linear polarization of the observed radiation could already be performed during the experiment.
This fact will, in the near future, make it possible to open the largely new "window" of polarimetric measurements on low-energy X-rays to further atomic-physics processes.
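For context, Compton polarimeters of the kind mentioned above exploit the azimuthal asymmetry of Compton scattering for linearly polarized photons, described by the textbook Klein-Nishina cross section $d\sigma/d\Omega = (r_e^2/2)\,(E'/E)^2\,\big(E'/E + E/E' - 2\sin^2\theta\,\cos^2\phi\big)$, where $\phi$ is measured with respect to the polarization vector; scattering perpendicular to the polarization direction is preferred, which is what a segmented detector resolves. This standard relation is quoted for orientation only and is not specific to this work.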
Nanomaterials, i.e., materials that are manufactured at a very small spatial scale, can possess unique physical and chemical properties and exhibit novel characteristics compared to the same material without nanoscale features. The reduction of size down to the nanometer scale leads to an abundance of potential applications in different fields of technology. For instance, tailoring the physicochemical properties of nanomaterials to modify their interaction with a biological environment has been reflected in a number of biomedical applications.
Strategies to choose the size and the composition of nanoscale systems are often hindered by a limited understanding of interactions that are difficult to study experimentally. Such understanding can, however, be gained by means of advanced computer simulations. This thesis explores, from theoretical and computational viewpoints, the stability, electronic, and thermo-mechanical properties of nanoscale systems and materials related to biomedical applications.
We examine the ability of existing classical interatomic potentials to reproduce the stability and thermo-mechanical properties of metal systems, given that these potentials have been fitted to describe ground-state properties of the perfect bulk materials.
It is found that existing classical interatomic potentials describe highly excited vibrational states poorly when the system is far from the potential-energy minimum. On the other hand, the construction of a reliable computational model is essential for the further development of nanomaterials for applications. In this work, a new interatomic potential is proposed that correctly reproduces, in classical molecular dynamics simulations, both the melting temperature and the ground-state properties of different metals, such as gold, platinum, titanium, and magnesium. The suggested modification of a many-body potential is of a general nature and can be utilized for similar numerical explorations of the thermo-mechanical properties of a broad range of molecular and solid-state systems experiencing phase transitions.
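For orientation, many-body potentials for metals of the kind discussed here are commonly written in the embedded-atom form $E = \sum_i F(\rho_i) + \tfrac{1}{2}\sum_{i\neq j}\phi(r_{ij})$, with the host electron density $\rho_i = \sum_{j\neq i}\rho_{\mathrm{at}}(r_{ij})$; this is a standard ansatz quoted for context, and the specific modification proposed in the thesis is not reproduced here.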
The applicability of the classical interatomic potentials to the description of nanoscale systems, consisting of several tens to hundreds of atoms, is also explored in this study. This issue is important, for instance, in the case of nanostructured materials, where grains or nanocrystals have a typical size of a few nanometers. We validate classical potentials through comparison with density-functional theory calculations of small atomic clusters made of titanium and nickel. By this analysis, we demonstrate that classical potentials fitted to describe ground-state properties of a bulk material can describe the energetics of nanoscale systems with reasonable accuracy.
In this work, we also analyze the electronic properties of nanometer-sized nanoparticles made of gold, platinum, silver, and gadolinium; nanoparticles composed of these materials are of current interest for radiation-therapy applications. We focus on the production of low-energy electrons with kinetic energies from a few electronvolts to several tens of electronvolts. It is now established that low-energy secondary electrons of such energies play an important role in the nanoscale mechanisms of biological damage resulting from ionizing radiation. We provide a methodology for analyzing the dynamic response of nanoparticles of experimentally relevant sizes, namely of about several nanometers, exposed to ionizing radiation. Because of the large number of constituent atoms (about 1,000–10,000) and the consequently high computational cost, the electronic properties of such systems can hardly be described by means of ab initio methods based on a quantum-mechanical treatment of electrons, and this analysis has to rely on model approaches. By comparing the response of smaller systems (of about 1 nm size) calculated within the ab initio and the model frameworks, we validate this methodology and make predictions for the electron production in larger systems.
We have revealed that a significant increase in the number of low-energy electrons emitted from nanometer-sized noble-metal nanoparticles arises from collective electron excitations formed in these systems. It is demonstrated that the dominant mechanisms of electron-yield enhancement are related to the formation of plasmons excited in the system as a whole and of atomic giant resonances formed due to the excitation of valence d electrons in individual atoms of a nanoparticle. Embedded in a biological medium, the noble-metal nanoparticles thus represent an important source of low-energy electrons, able to produce significant irreparable damage in biological systems.
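For context, the textbook estimate for the dipole surface plasmon of a small free-electron sphere is $\omega_{\mathrm{Mie}} = \omega_p/\sqrt{3}$, with $\omega_p$ the bulk plasma frequency; this standard relation is quoted for orientation and is not a result of this work.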
A general methodology for studying the electronic properties of nanosystems is used to make quantitative predictions for electron production by non-metallic nanoparticles. The analysis illustrates that, due to a prominent collective response to an external electric field, carbon nanoparticles embedded in a biological medium also enhance the production of low-energy electrons. The number of low-energy electrons emitted from carbon nanoparticles is shown to be several times higher than in the case of liquid water.
The equation of state (EoS) of matter at extremely high temperatures and densities is currently not fully understood and remains a major challenge in the field of nuclear physics. Neutron stars harbor such extreme conditions and therefore serve as celestial laboratories for constraining the dense-matter EoS. In this thesis, we present a novel algorithm that combines the idea of Bayesian analysis with the computational efficiency of neural networks to reconstruct the dense-matter equation of state from mass-radius observations of neutron stars. We show that the results are compatible with those from earlier works based on conventional methods and are in agreement with the limits on tidal deformabilities obtained from the gravitational-wave event GW170817. We also observe that the squared speed of sound of the reconstructed EoS features a peak, indicating a likely convergence to the conformal limit at asymptotic densities, as expected from quantum chromodynamics. The novel algorithm can also be applied in various fields faced with computational challenges in solving inverse problems. We further examine the efficiency of deep learning methods for analyzing gravitational waves from compact binary coalescences. In particular, we develop a deep-learning classifier to segregate simulated gravitational-wave data into three classes: signals from binary black hole mergers, signals from binary neutron star mergers, or white noise without any signals. A second deep-learning algorithm allows for the regression of the chirp mass and the combined tidal deformability from simulated binary neutron star mergers. An accurate estimation of these parameters is crucial to constrain the underlying EoS. Lastly, we explore the effects of finite temperature on the binary neutron star merger remnant from GW170817. Isentropic EoSs are used to infer the frequencies of the rigidly rotating remnant, which are found to be significantly lower than previous estimates from zero-temperature EoSs. Overall, this thesis presents novel deep learning methods to constrain the neutron star EoS, which will prove useful in the future as more observational data become available in the upcoming years.
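To make the classification task concrete, a minimal sketch of such a three-class network is given below. The architecture, input length, and all hyperparameters are hypothetical stand-ins; the thesis's actual network is not reproduced here:

```python
import torch
import torch.nn as nn

# Hypothetical minimal 1D CNN for the three-class task described above:
# BBH signal, BNS signal, or pure noise. Input: a whitened strain segment.
class GWClassifier(nn.Module):
    def __init__(self, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=16, stride=4), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(8),
        )
        self.head = nn.Linear(32 * 8, n_classes)

    def forward(self, x):                   # x: (batch, 1, n_samples)
        z = self.features(x).flatten(1)
        return self.head(z)                 # logits over the three classes

model = GWClassifier()
strain = torch.randn(4, 1, 2048)            # stand-in for whitened strain data
print(model(strain).shape)                   # torch.Size([4, 3])
```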
Construction and commissioning of a setup to study ageing phenomena in high rate gas detectors
(2014)
In high-rate heavy-ion experiments, gaseous detectors face major challenges in terms of performance degradation due to a phenomenon dubbed ageing. In this thesis, a setup for high-precision ageing studies has been constructed and commissioned at the GSI detector laboratory. The main objective is the study of ageing phenomena evoked by materials used to build gaseous detectors for the Compressed Baryonic Matter (CBM) experiment at the future Facility for Antiproton and Ion Research (FAIR).
The precision of the measurement, e.g., of the gain of a gaseous detector, is a key element in ageing studies: it allows the measurement to be performed at realistic rates within an acceptable time span. It is well known that accelerating ageing by employing high-intensity sources might produce misleading results. The primary objective is therefore to build an apparatus which allows very accurate measurements and is thus sensitive to minute degradations in detector performance. The construction and commissioning of the setup was carried out in two steps. During the first step of this work, a simpler setup, which already existed in the detector laboratory of GSI, was utilised to define all conditions related to ageing studies. The outcome of these studies defined the properties and characteristics that must be met to build and operate a new, sophisticated and precise setup. The already existing setup consisted of two identical Multi Wire Proportional Chambers (MWPCs), a gas mixing station, an 55Fe source, an x-ray generator, an outgassing box and stainless steel tubing. In a first step, the gain and electric field configuration of the MWPCs were simulated by a combination of a gas simulation program (Magboltz) and an electric field simulation program (Garfield). The performance and operating conditions of the chambers were thoroughly characterised before utilising them in first preparatory ageing tests. The main diagnostic parameter in ageing studies is the detector gain; thus it is mandatory for precise ageing studies to minimise the systematic and statistical variation of the pressure- and temperature-corrected gain. To achieve the required accuracy, several improvements of the chamber design and the gas system have been implemented. In addition, the temperature measurement has been optimised. During the preparatory tests, several ageing studies have been carried out. The ageing effects of seven materials and gases were investigated during these tests: RTV-3145, Ar/CO2 gas, Durostone flushed with Ar/Isobutane gas, Vetronit G11, Vetronit G11 contaminated with Micro 3000 and Gerband 705. The results of these studies went into the design of the new, sophisticated ageing setup. For example, some tests revealed that there was, even after cleaning, a certain level of contamination with "ageing agents" in the existing setup, which made it imperative to ensure a very high level of cleanliness of all components during the construction of the new setup. The curing period of some test samples, like glues, and the gas flow rate were found to be very important factors that must be taken into account to obtain comparable results. Very important changes in the chamber design have been made, i.e., the aluminium-Kapton cathodes used in the MWPCs have been replaced with multi-wire planes, and the fibreglass housing of the chamber has been changed to metal. The second step started with building the new setup, which was designed based on the findings from the first step. The new ageing setup consists of three MWPCs, two moving platforms, an 55Fe source, a copper-anode x-ray generator, two outgassing boxes, and both flexible and rigid stainless steel tubes. Before fabrication of the chambers, simulations of their electric field and gain were performed using the Magboltz and Garfield programs. After that, the chambers were installed and tested. A 0.3% peak-to-peak residual variation of the corrected gain has been achieved. Finally, the complete setup has been operated with full functionality under no-ageing conditions for one week. This test revealed a very stable gain in all three chambers. After that, two materials (Gerband 705 and RTV-3145) were inserted into the two outgassing boxes and tested. They revealed ageing rates of about 0.3%/mC/cm and 3%/mC/cm, respectively. The final test proves the stability and accuracy of the ageing measurements carried out with the ageing setup at the detector laboratory at GSI, which is now ready to conduct the envisaged systematic ageing studies.
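The pressure- and temperature-corrected gain mentioned above relies on the fact that the gas gain of a wire chamber depends on the gas density. A common way to apply such a correction, sketched below with synthetic numbers (the actual calibration constants of this setup are not quoted in the text), is to fit the logarithm of the measured gain against P/T and divide the dependence out:

```python
import numpy as np

# Common density correction for wire-chamber gain: ln(G) is, to good
# approximation, linear in P/T, so a reference fit removes ambient drifts.
# The numbers below are synthetic, for illustration only.
P = np.array([995.0, 1000.0, 1005.0, 1010.0])   # pressure (mbar)
T = np.array([296.0, 296.5, 297.0, 297.5])      # temperature (K)
G = np.array([1.060, 1.020, 0.985, 0.950])      # measured relative gain

slope, intercept = np.polyfit(P / T, np.log(G), deg=1)

def corrected_gain(g, p, t, ref=(P / T).mean()):
    """Remove the ambient P/T dependence relative to a reference point."""
    return g * np.exp(-slope * (p / t - ref))

print(corrected_gain(G, P, T))   # residual variation after correction
```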
The accelerator facility FRANZ is currently being set up in the physics experimental hall at the Riedberg campus of Goethe University. FRANZ stands for the Frankfurt Neutron Source at the Stern-Gerlach-Zentrum. The facility offers a wide range of experimental possibilities for the investigation of intense, pulsed proton beams. One research focus at the secondary neutron beams is measurements for nuclear astrophysics. The neutrons are produced by a 2 MeV proton beam via the reaction 7Li(p,n)7Be. The planned experiments require both a pulse repetition rate of up to 250 kHz, realized here for the first time worldwide, at pulse currents in the 100 mA range, and an extreme pulse compression to one nanosecond, with pulse currents then in the ampere range. In addition, continuous-wave beam operation in the mA current range is also possible. Many individual accelerator components, such as the ion source, the chopper for pulse shaping, the radio-frequency-coupled RFQ-IH combination, the rebuncher in the form of a CH structure, and the bunch compressor, are new developments. Average beam powers of up to 24 kW occur in the low-energy beam transport section, since the ion source must always be operated in continuous-wave mode, even at high current with high pulse repetition rates. Personnel and equipment protection therefore also plays an essential role in the design of the control system for FRANZ. The layout of FRANZ and its main components are explained in Chapter 2. The many different components, such as the high-voltage area, magnets, radio-frequency components and cavities, vacuum components, beam diagnostics, and detectors, make it plausible that the control system for such a facility must also be specially designed. For comparison, Chapter 4 presents the control and regulation concepts of current large accelerator projects, namely the European Spallation Source (ESS) and the Facility for Antiproton and Ion Research (FAIR). In the present work, the ion source was chosen as a complex accelerator component for developing and testing control and regulation schemes. A flow chart (Fig. 5.15) for starting up and operating the ion source was developed and implemented. In detail, the dependence of the hot-cathode parameters on the operating time was investigated. From this, an algorithm for predicting a timely filament replacement could be derived. Furthermore, the readjustment of the cathode heating current was automated in order to stabilize the arc-discharge voltage within an interval of ±0.5 V. The ramp-up of the filament current was also automated. For this purpose, the change in vacuum pressure as a function of the filament-current increase is measured and evaluated, and the next permitted current-increase step is derived from it. In this way, the operating state is reached faster and in a more controlled manner than by manual ramp-up. This brings the goal of unattended ion-source operation closer. In a first test of component control and data acquisition, an ion beam was extracted and transported through the first focusing magnet, a solenoid. The excitation current of the solenoid as well as the beam energy were scanned automatically, the data were stored, and a contour plot of the measured beam current behind the focusing lens was created from them (Fig. 5). The present work deals only with the "slow" control and regulation processes, while the fast processes are regulated independently in the radio-frequency control system. In addition to monitoring the operating state of all components, all data required for service and personnel safety are also logged. The system is based on MNDACS (Mesh Networked Data Acquisition and Control System) and is written in Java.
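Schematically, this automated readjustment can be pictured as a slow proportional-integral loop acting on the arc voltage. The following toy sketch is an illustration under assumed gains and an invented linear plant model; it does not reproduce the actual MNDACS implementation:

```python
# Toy sketch of the slow loop stabilizing the arc-discharge voltage by
# adjusting the cathode heating current. The setpoint, the gains and the
# linear 'plant' model are invented for illustration only.
V_SET = 100.0           # desired arc voltage (V); illustrative value
KP, KI = 0.1, 0.02      # assumed proportional and integral gains

def arc_voltage(i_heat):
    """Invented plant model: arc voltage falls as the heating current rises."""
    return 130.0 - 0.6 * i_heat

i_heat, err_sum = 40.0, 0.0
for _ in range(200):                      # one iteration per slow control tick
    err = V_SET - arc_voltage(i_heat)     # positive err: voltage too low
    err_sum += err
    i_heat -= KP * err + KI * err_sum     # less heating raises the arc voltage
print(round(arc_voltage(i_heat), 2))      # settles well inside the +-0.5 V band
```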
MNDACS consists of a kernel, which runs the component-driver software as well as the network server and the graphical network interface (GUI). It also includes the Driver Abstraction Layer (DAL), which provides access to further computers or to local drivers. CORBA serves as the middleware for network communication. It handles communication with external software and also determines the rerouting of communication in the event of line interruptions or a local computer crash. FRANZ has two control levels: the high-level control and the data processing run over Ethernet, while the interlock and safety system runs over the low-level control. The network connections run over 1 Gb Ethernet links, so that fast exchange is still possible even in the presence of local network disturbances. To keep the computer system running during power outages, an uninterruptible power supply (UPS) was procured as part of this work and successfully tested at the high-voltage terminal.