Great interest has emerged recently in the search for Kitaev spin liquid states in real materials. Such states rely on strongly anisotropic magnetic interactions, which have been suggested to exist in a number of candidate materials based on Ir and Ru. This thesis pursues two main aims. The first is the investigation of the electronic and magnetic properties of the candidate materials Na2IrO3, α-Li2IrO3, α-RuCl3, γ-Li2IrO3, and Ba3YIr2O9 for Kitaev physics, in which both spin-orbit coupling and correlation effects are important. The second is the development of methods for the microscopic description of correlated materials, combining many-body techniques with density functional theory (DFT). ...
At sufficiently high temperatures and baryon densities, nuclear matter is expected to undergo a transition into the quark-gluon plasma (QGP), consisting of deconfined quarks and gluons and accompanied by chiral symmetry restoration. Signals of these two fundamental characteristics of quantum chromodynamics (QCD) can be studied in ultra-relativistic heavy-ion collisions, which produce a relatively large volume of high energy and nucleon densities as existed in the early universe. Dileptons are unique penetrating probes for this purpose, since they traverse the surrounding medium with negligible interaction and are created throughout the entire evolution of the initially produced fireball. A multitude of experiments at SIS18, SPS and RHIC have taken on the challenging task of measuring these rare probes in a heavy-ion environment. NA60's high-quality dimuon measurements have identified the broadened ρ spectral function as the favored scenario to explain the low-mass dilepton excess, and partonic sources as dominant at intermediate dilepton masses.
Enabled by the addition of a TOF detector system in 2010, the first phase of the Beam Energy Scan (BES-I) at RHIC allows STAR to conduct an unprecedented energy-dependent study of dielectron production within a homogeneous experimental environment, and hence to close the wide gap in the QCD phase diagram between SPS and top RHIC energies. This thesis concentrates on understanding the low-mass-region (LMR) enhancement with respect to its invariant-mass, transverse-momentum and energy dependence. It studies dielectron production in Au+Au collisions at beam energies of 19.6, 27, 39, and 62.4 GeV with sufficient statistics. In conjunction with the published STAR results at top RHIC energy, this thesis presents the first comprehensive energy-dependent study of dielectron production.
This includes invariant-mass and transverse-momentum spectra for the four beam energies, measured in 0-80% minimum-bias Au+Au collisions with high statistics up to 3.5 GeV/c² and 2.2 GeV/c, respectively. Their comparison with cocktail simulations of hadronic sources reveals a sizeable and steadily increasing excess yield in the LMR at all beam energies. The scenario of broadened in-medium ρ spectral functions not only serves well as the dominant underlying source but also proves universal in nature, since it quantitatively and qualitatively explains the LMR enhancements measured over the wide range from SPS to top RHIC energies. It shows that most of the enhancement is governed by interactions of the ρ meson with thermal resonance excitations in the late(r)-stage hot and dense hadronic phase. This conclusion is supported by the energy-dependent measurement of integrated LMR excess yields and enhancement factors. The former do not exhibit a strong dependence on beam energy, as expected from the approximately constant total baryon density above 20 GeV, and the latter agree with the CERES measurement at SPS energy. The consistency of the excess yields and their agreement with model calculations over the wide RHIC energy regime make a strong case for LMR enhancements on the order of a factor of 2-3.
The extent of the results presented here enables a more solid discussion of their relation to chiral symmetry restoration from a theoretical point of view. High-statistics measurements at BES-II hold the promise to confirm these conclusions, along with the LMR enhancement's relation to the total baryon density with decreasing beam energy.
The term superconductivity describes the phenomenon of vanishing electrical resistivity in a certain material, then called a superconductor, below a critical, typically very low, temperature. Since the discovery of superconductivity in mercury in 1911, many other superconductors have been found, and the critical temperature below which superconductivity occurs has recently been raised to temperatures encountered in a cold Antarctic winter.
Superconductors are promising materials for applications. They can serve as nearly loss-free cables for energy transmission, in coils for the generation of high magnetic fields, or in various electronic devices, such as detectors for magnetic fields. Despite these obvious advantages, the cost of using superconductors depends strongly on the cooling effort needed to realize the superconducting state. Therefore, the search for a superconductor with a critical temperature above room temperature, which would avoid the need for any specialized cooling system, is one of the main projects of contemporary research in condensed matter physics.
While a theory of superconductivity in simple metals was already developed in the 1950s, it has meanwhile been recognized that many superconductors are unconventional in the sense that their behavior does not follow that theory. Unconventional superconductors differ from conventional ones mainly in the momentum- and real-space symmetry of the order parameter associated with the superconducting state. While conventional superconductors have a uniform order parameter, unconventional superconductors can have an order parameter with internal structure. Alternative theoretical descriptions have of course been suggested, but the debate over the correct theory of unconventional superconductivity has not yet been settled. Ultimately, this lack of a general theory of superconductivity prevents a targeted search for the room-temperature superconductor. Any new theoretical approach must, however, prove its value by correctly predicting the structure of the superconducting order parameter and further material properties.
In this work we participate in the search for a theory of unconventional superconductivity. We discuss the theory of superconductivity mediated by electron-electron interactions, which has been popular in the last few decades due to its success in explaining various properties of the copper-based superconductors discovered in the 1980s. We give a detailed derivation of the so-called random phase approximation (RPA) for the Hubbard model in terms of diagrammatic many-body theory and apply it in conjunction with low-energy kinetic Hamiltonians, which we construct from first-principles calculations in the framework of density functional theory. Density functional theory is an established technique for calculating the electronic and magnetic properties of materials solely based on their crystal structure. Its practical implementations in computer codes, however, do not describe complicated many-electron phenomena such as the superconducting state that we are interested in here. Nevertheless, it can provide important information about the properties of the normal state from which superconductivity emerges. In our theory we use this information and approach the superconducting state from the normal state.
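As an illustration of the kind of calculation meant here, the following minimal sketch (not the thesis's code, and with a simple square-lattice tight-binding band standing in for a DFT-derived low-energy Hamiltonian) evaluates the static bare susceptibility of a one-band Hubbard model on a discrete k-mesh and the corresponding RPA Stoner factor.

```python
# Minimal RPA sketch for a one-band Hubbard model (illustrative, not the thesis code).
import numpy as np

t, U, mu, T = 1.0, 2.0, 0.0, 0.05   # hopping, Hubbard U, chemical potential, temperature
L = 32                              # linear size of the k-mesh

k = 2 * np.pi * np.arange(L) / L
KX, KY = np.meshgrid(k, k, indexing="ij")
eps = -2 * t * (np.cos(KX) + np.cos(KY)) - mu        # tight-binding dispersion eps(k)
f = 1.0 / (np.exp(eps / T) + 1.0)                    # Fermi function

def chi0(qx, qy):
    """Static bare (Lindhard) susceptibility chi0(q) on the discrete k-mesh."""
    eps_q = -2 * t * (np.cos(KX + qx) + np.cos(KY + qy)) - mu
    f_q = 1.0 / (np.exp(eps_q / T) + 1.0)
    num = f_q - f
    den = eps - eps_q
    small = np.abs(den) < 1e-10
    den_safe = np.where(small, 1.0, den)
    # degenerate limit eps(k+q) -> eps(k): ratio -> -df/deps
    minus_dfde = np.exp(eps / T) / (T * (np.exp(eps / T) + 1.0) ** 2)
    ratio = np.where(small, minus_dfde, num / den_safe)
    return ratio.sum() / L**2

for qx, qy in [(np.pi, np.pi), (np.pi, 0.0), (0.0, 0.0)]:
    c0 = chi0(qx, qy)
    # RPA: chi(q) = chi0 / (1 - U*chi0); U*chi0 -> 1 signals a magnetic instability
    print(f"q = ({qx:.2f}, {qy:.2f}): chi0 = {c0:.4f}, U*chi0 = {U * c0:.4f}")
```

In an actual material study, the dispersion and interaction parameters would come from the DFT-derived low-energy Hamiltonian rather than the illustrative values used here.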
Such an interfacing of different calculational techniques requires a substantial amount of implementation work in the form of computer code. Including the computer code in this work would take far too much space, but since some of the decisions on approximations in the calculational formalism are guided by the feasibility of the associated computer calculations, we discuss the numerical implementation in great detail.
We apply the developed methods to quasi-two-dimensional organic charge transfer salts and iron-based superconductors. Finally, we discuss implications of our findings for the interpretation of various experiments.
In this thesis we explore the characteristics of strongly interacting matter, described by Quantum Chromodynamics (QCD). In particular, we investigate the properties of QCD at extreme densities, a region yet to be explored by first principle methods. We base the study on lattice gauge theory with Wilson fermions in the strong coupling, heavy quark regime. We expand the lattice action around this limit, and carry out analytic integrals over the gauge links to obtain an effective, dimensionally reduced, theory of Polyakov loop interactions.
The 3D effective theory suffers only from a mild sign problem, and we briefly outline how it can be simulated using either Monte Carlo techniques with reweighting or the complex Langevin flow. We then continue to the main topic of the thesis, namely the analytic treatment of the effective theory. We introduce the linked cluster expansion, a method ideally suited to thermodynamic expansions. The complex nature of the effective theory's action requires the development of a generalisation of the linked cluster expansion. We find a mapping between the generalised linked cluster expansion and our effective theory, and use this to compute thermodynamic quantities.
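The effective Polyakov-loop theory itself is not reproduced here; as a minimal illustration of the complex Langevin idea mentioned above, the sketch below evolves a single complexified variable under a Gaussian "action" with a complex coupling, for which the exact result <x²> = 1/σ is known and serves as a check.

```python
# Complex Langevin for a toy complex action S(x) = sigma * x^2 / 2 (not the thesis's
# effective theory): exp(-S) is complex, so ordinary importance sampling fails,
# but the exact expectation <x^2> = 1/sigma is known for comparison.
import numpy as np

rng = np.random.default_rng(1)
sigma = 1.0 + 1.0j          # complex coupling -> phase/sign problem
dt = 1e-3                   # Langevin step size
n_steps, n_therm = 200_000, 10_000

z = 0.0 + 0.0j              # complexified variable z = x + i y
samples = []
for step in range(n_steps):
    drift = -sigma * z                                   # -dS/dz
    z = z + drift * dt + np.sqrt(2 * dt) * rng.normal()  # real Gaussian noise only
    if step >= n_therm:
        samples.append(z * z)

est = np.mean(samples)
exact = 1.0 / sigma
print(f"<z^2> ~ {est.real:.4f} + {est.imag:.4f}i,  exact 1/sigma = {exact.real:.4f} + {exact.imag:.4f}i")
```

For this Gaussian toy model the complexified process converges to the correct result; the thesis applies the same strategy to the dimensionally reduced Polyakov-loop theory.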
Lastly, various resummation techniques are explored, and a chain resummation is implemented at the level of the effective theory itself. The resummed effective theory describes not only nearest-neighbour, next-to-nearest-neighbour and further such interactions, but couplings at all distances, making it well suited for describing macroscopic effects. We compute the equation of state for cold and dense heavy QCD and find a correspondence with that of non-relativistic free fermions, indicating a shift of the dynamics in the continuum.
We conclude this thesis by presenting two possible extensions to new physics using the techniques outlined within. The first is the application of the effective theory in the large-$N_c$ limit, which is of particular interest for the study of conformal field theory. The second is the computation of analytic Yang-Lee zeros, which can be applied in the search for real phase transitions.
In recent years it has been recognized that a quantum field theory (QFT) called quantum chromodynamics (QCD) is the correct theory of the strong interactions. QCD successfully describes the strong interactions that bind quarks into nucleons and nucleons into atomic nuclei. However, the theoretical description of many strong-interaction phenomena is difficult because of the strong coupling at low energies. Heavy-ion collision experiments are one possible way to investigate the characteristic phenomena and properties of QCD matter. In such experiments, heavy (i.e. large) atomic nuclei, for example gold (at RHIC) or lead (at CERN, LHC), are collided at an ultrarelativistic center-of-mass energy √s. In this way it is possible to produce a large amount of matter at high energy density. The goal of heavy-ion collisions is to create and characterize a macroscopic phase of free quarks and gluons in local thermal equilibrium. Such a state of matter can provide new information about the QCD phase diagram and the QCD phase transition. Such a transition is believed to have taken place when the matter of the early universe converted from a plasma of quarks and gluons (QGP) into a gas of hadrons...
Different approaches are possible when it comes to modeling the brain. Given its biological nature, models can be constructed out of the chemical and biological building blocks known to be at play in the brain, formulating a given mechanism in terms of the basic interactions underlying it. On the other hand, the functions of the brain can be described in a more general or macroscopic way, in terms of desirable goals. These goals may include reducing metabolic costs, being stable or robust, or being efficient in computational terms. Synaptic plasticity, that is, the way the connections between neurons evolve in time, is no exception to this. In the following work we formulate, and study the properties of, synaptic plasticity models employing two complementary approaches: a top-down approach, deriving a learning rule from a guiding principle for rate-encoding neurons, and a bottom-up approach, where a simple yet biophysical rule for time-dependent plasticity is constructed.
We begin this thesis with a general overview, in Chapter 1, of the properties of neurons and their connections, clarifying the notation and jargon of the field. These will be our building blocks and will also determine the constraints we need to respect when formulating our models. We will discuss the present challenges of computational neuroscience, as well as the role of physicists in this line of research.
In Chapters 2 and 3, we develop and study a local online Hebbian self-limiting synaptic plasticity rule, employing the aforementioned top-down approach. First, in Chapter 2 we formulate the stationarity principle of statistical learning in terms of the Fisher information of the output probability distribution with respect to the synaptic weights. To ensure that the learning rules are formulated in terms of information locally available to a synapse, we employ the local-synapse extension of the one-dimensional Fisher information. Once the objective function has been defined, we derive an online synaptic plasticity rule via stochastic gradient descent.
In order to test the computational capabilities of a neuron evolving according to this rule (combined with a preexisting intrinsic plasticity rule), we perform a series of numerical experiments, training the neuron with different input distributions.
We observe that, for input distributions closely resembling a multivariate normal distribution, the neuron robustly selects the first principal component of the distribution, showing otherwise a strong preference for directions of large negative excess kurtosis.
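The Fisher-information-based rule itself is derived in the thesis and not reproduced here; as a stand-in, the sketch below uses Oja's rule, a classic local online Hebbian rule that, like the behaviour reported above for near-Gaussian inputs, converges to the first principal component of the input distribution.

```python
# Illustrative stand-in (Oja's rule), not the thesis's Fisher-information rule:
# a local, online Hebbian update whose weight vector converges (up to sign)
# to the first principal component of the input distribution.
import numpy as np

rng = np.random.default_rng(0)

# Correlated 2D Gaussian inputs with a dominant direction along (1, 1)
cov = np.array([[3.0, 2.0],
                [2.0, 3.0]])
x_data = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=20_000)

w = rng.normal(size=2)       # synaptic weights
eta = 1e-3                   # learning rate

for x in x_data:
    y = w @ x                            # linear neuron output
    w += eta * y * (x - y * w)           # Hebbian term with self-limiting decay

# Compare with the leading eigenvector of the input covariance (sign may differ)
eigvals, eigvecs = np.linalg.eigh(cov)
pc1 = eigvecs[:, np.argmax(eigvals)]
print("learned w (normalised):", w / np.linalg.norm(w))
print("first principal component:", pc1)
```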
In Chapter 3 we study the robustness of the learning rule derived in Chapter 2 with respect to variations in the neural model's transfer function. In particular, we find an equivalent cubic form of the rule which, given its functional simplicity, permits the analytic computation of the attractors (stationary solutions) of the learning procedure as a function of the statistical moments of the input distribution. In this way, we explain the numerical findings of Chapter 2 analytically and formulate a prediction: if the neuron is selective to non-Gaussian input directions, it should be suitable for applications to independent component analysis. We close this section by showing that a neuron operating under these rules can indeed learn the independent components in the non-linear bars problem.
A simple biophysical model for spike-timing-dependent plasticity (STDP) is developed in Chapter 4. The model is formulated in terms of two decaying traces present in the synapse, namely the fraction of activated NMDA receptors and the calcium concentration, which serve as clocks measuring the times of pre- and postsynaptic spikes. While constructed in terms of the key biological elements thought to be involved in the process, we have kept the functional dependencies of the variables as simple as possible to allow for analytic tractability. Despite its simplicity, the model is able to reproduce several experimental results, including the typical pairwise STDP curve and triplet results, in both hippocampal cultures and layer 2/3 cortical neurons. Thanks to the model's functional simplicity, we are able to compute these results analytically, establishing a direct and transparent connection between the model's internal parameters and the qualitative features of the results.
Finally, in order to make a connection to synaptic plasticity for rate-encoding neural models, we train the synapse with uncorrelated Poisson pre- and postsynaptic spike trains and compute the expected synaptic weight change as a function of the frequencies of these spike trains. Interestingly, a Hebbian (in the rate-encoding sense of the word) BCM-like behavior is recovered in this setup for hippocampal neurons, while dominating depression seems unavoidable for parameter configurations reproducing experimentally observed triplet nonlinearities in layer 2/3 cortical neurons. Potentiation can, however, be recovered in these neurons when correlations between pre- and postsynaptic spikes are present. We end this chapter by discussing the relation to existing experimental results, leaving open questions and predictions for future experiments.
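The thesis's NMDA/calcium model is not reproduced here; the sketch below implements a generic two-trace, pair-based STDP rule driven by uncorrelated Poisson spike trains, illustrating how decaying traces act as clocks and how an expected weight change per unit time can be read off as a function of the pre- and postsynaptic rates. All parameter values are illustrative.

```python
# Generic two-trace STDP sketch (not the thesis's NMDA/calcium model): a presynaptic
# trace x and a postsynaptic trace y decay exponentially and act as clocks for the
# time since the last spikes; potentiation reads x at postsynaptic spikes,
# depression reads y at presynaptic spikes.
import numpy as np

def mean_dw(rate_pre, rate_post, T=200.0, dt=1e-3, seed=0,
            tau_pre=0.02, tau_post=0.02, A_plus=0.01, A_minus=0.012):
    rng = np.random.default_rng(seed)
    x = y = 0.0          # decaying traces ("clocks")
    dw = 0.0
    for _ in range(int(T / dt)):
        pre = rng.random() < rate_pre * dt      # Poisson presynaptic spike
        post = rng.random() < rate_post * dt    # Poisson postsynaptic spike
        x -= dt * x / tau_pre
        y -= dt * y / tau_post
        if pre:
            x += 1.0
            dw -= A_minus * y                   # depression: post-before-pre pairings
        if post:
            y += 1.0
            dw += A_plus * x                    # potentiation: pre-before-post pairings
    return dw / T                               # mean weight change per second

for r in (2.0, 10.0, 30.0):
    print(f"rates {r:5.1f} Hz: <dw/dt> = {mean_dw(r, r):+.5f}")
```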
A set of summary cards for the models employed, together with listings of the relevant variables and parameters, is presented at the end of the thesis for easier access and permanent reference.
The Standard Model is one of the greatest successes of modern theoretical physics. It describes the physics of elementary particles by means of three forces: the electromagnetic, the weak and the strong interactions. The electromagnetic and the weak interactions are rather well understood in comparison with the strong interaction.
The latter is as fundamental as the others; it is responsible for the formation of all hadrons, which are classified into mesons and baryons. Well-known examples are the pion for the former, and the proton and the neutron, which form the nucleus of every atom, for the latter. This fundamental force is believed to be described by the theory of quantum chromodynamics (QCD). According to this theory, hadrons are not elementary particles but are composed of quarks and gluons. The gluons are the vector particles of the force and thus bosons of spin 1, while the quarks constitute the matter and are fermions of spin 1/2. To describe the interaction, a new quantum number had to be introduced: the color charge, which exists in three different types (blue, green and red). The name has not been chosen arbitrarily, as states built from three quarks of different colors are colorless, in the same way that mixing the three primary colors gives white. However, no colored structure has ever been observed experimentally. The quarks and the gluons appear to be confined in colorless hadrons. This property of QCD is called confinement and results from a large coupling constant at low energy (or large distance). At high energy (or small distance), the perturbative analysis of QCD shows that the coupling constant is small and that quarks and gluons are almost free. This property is called asymptotic freedom. The ability of QCD to describe both behaviors is one of its remarkable characteristics. However, both phenomena are not well understood, and one needs a method to study both the perturbative and the confining regime.
The only known method that fulfills the above criteria is lattice QCD and, more generally, lattice quantum field theory (LQFT). It consists of a discretization of spacetime and a formulation of QCD on a four-dimensional Euclidean spacetime grid of spacing a. In this way the theory is naturally regularized and mathematically well-defined. On the other hand, the path integral formalism allows the theory to be treated as a statistical-mechanics system, which can be evaluated via a Markov chain Monte Carlo algorithm. This method was first suggested by Wilson in 1974 [1], and shortly afterwards Creutz performed the first numerical simulations of Yang-Mills theory [2] using a heat-bath Monte Carlo algorithm. The method is, however, extremely demanding in computational power. In its early days it was criticized because the only feasible simulations involved unphysical settings such as extremely large quark masses, large lattice spacings a and no dynamical quarks. With the progress of computers and the advent of supercomputers, studies have come close to the physical point. But one still needs to deal with a discrete spacetime and a finite volume. Several techniques have been developed to estimate the infinite-volume and continuum limits; the smaller the lattice spacing and the larger the volume, the better the extrapolation to these limits. The simulations are still very expensive, and at the moment a typical box length is L ≈ 4 fm with a ≈ 0.08 fm. However, it has been realized in simulations of pure Yang-Mills theory and other lower-dimensional models that the topology freezes at small a [3]; this was also observed recently in full QCD simulations [4,5].
The typical lattice spacing at which this problem appears in QCD is a ≈ 0.05 fm, but this value depends on the quark mass used and on the algorithm. The freezing of topology leads to results that differ from physical results, and solving this issue is important for the future of LQCD [6]. Recently, several methods to overcome the problem have been suggested; one of the most popular is the use of open boundary conditions [7], but this promising method still has its own issues, mainly the breaking of translation invariance.
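Lattice QCD itself is far beyond a short example, but the basic workflow described above can be illustrated with a toy model: the sketch below discretizes a two-dimensional Euclidean scalar phi^4 action on a periodic grid and samples it with a Metropolis Markov chain Monte Carlo (a stand-in for the heat-bath algorithm mentioned above); all couplings are illustrative.

```python
# Toy illustration of the Euclidean lattice Monte Carlo workflow: a 2D phi^4
# scalar field with a Metropolis update -- not QCD, but the same basic strategy
# of discretizing the Euclidean action and sampling it as a statistical system.
import numpy as np

rng = np.random.default_rng(42)
L = 16                      # lattice extent (L x L sites, periodic boundaries)
kappa, lam = 0.25, 0.02     # hopping parameter and quartic coupling (illustrative)
phi = rng.normal(size=(L, L))

def local_action(phi, i, j, val):
    """Action terms involving site (i, j) when its field value is `val`."""
    nn = (phi[(i + 1) % L, j] + phi[(i - 1) % L, j] +
          phi[i, (j + 1) % L] + phi[i, (j - 1) % L])
    return -2.0 * kappa * val * nn + val**2 + lam * (val**2 - 1.0)**2

def sweep(phi, step=0.5):
    for i in range(L):
        for j in range(L):
            old, new = phi[i, j], phi[i, j] + rng.uniform(-step, step)
            dS = local_action(phi, i, j, new) - local_action(phi, i, j, old)
            if dS < 0 or rng.random() < np.exp(-dS):   # Metropolis accept/reject
                phi[i, j] = new

for _ in range(200):        # thermalisation sweeps
    sweep(phi)
vals = []
for _ in range(500):        # measurement sweeps
    sweep(phi)
    vals.append(np.mean(phi**2))
# naive error estimate, autocorrelations ignored
print(f"<phi^2> = {np.mean(vals):.4f} +/- {np.std(vals) / np.sqrt(len(vals)):.4f}")
```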
The Large Hadron Collider (LHC) is the biggest and most powerful particle accelerator in the world, designed to collide two proton beams with a particle momentum of 7 TeV/c each. The stored energy of 362 MJ in each beam is sufficient to melt 500 kg of copper or to evaporate about 300 litres of water. An accidental release of even a small fraction of the beam energy can cause severe damage to accelerator equipment, so reliable machine protection systems are necessary to operate the accelerator complex safely. To design a machine protection system, it is essential to know the damage potential of the stored beam and the consequences in case of a failure. One catastrophic failure scenario is the loss of the entire beam into the aperture due to a problem with the beam dumping system.
This thesis presents the simulation studies, the results of a benchmarking experiment, and a detailed target investigation for this failure case. In the experiment, solid copper cylinders were irradiated with the 440 GeV proton beam delivered by the Super Proton Synchrotron (SPS) at the High Radiation to Materials (HiRadMat) facility at CERN. The experiment confirmed the existence of the so-called hydrodynamic tunneling phenomenon for the first time. Detailed numerical simulations of particle-matter interaction with FLUKA and with the two-dimensional hydrodynamic code BIG2 were carried out. Excellent agreement was found between the experimental and the simulation results, which validates the predictions for the 7 TeV beam of the LHC. The hydrodynamic tunneling effect is of considerable importance for the design of machine protection systems for accelerators with high stored beam energy. In addition, this thesis presents the first studies of the damage potential with beam parameters of the Future Circular Collider (FCC).
To detect beam losses due to fast failures, it is essential to have fast beam instrumentation. Diamond-based particle detectors are able to detect beam losses on a nanosecond time scale. Specially designed diamond detectors were used in the experiment mentioned above. Their efficiency and response have been studied for the first time over five orders of magnitude in bunch intensity with electrons at the Beam Test Facility (BTF) at INFN, Frascati, Italy. The results of these measurements are discussed in this thesis. Furthermore, an overview of the applications of diamond-based particle detectors in damage experiments and for LHC operation is presented.
Lepton pairs emerging from decays of virtual photons represent promising probes of nuclear matter under extreme conditions of temperature and density. These extreme conditions can be reached in heavy-ion collisions at various facilities around the world, with collision energies in the center-of-mass system (√sNN) ranging from a few GeV (SIS) to the TeV scale (LHC). In the energy domain of 1-2 GeV per nucleon (GeV/u), the HADES experiment at the GSI Helmholtzzentrum für Schwerionenforschung in Darmstadt studies dielectron and strangeness production.
Various reactions, for example collisions of pions, protons, deuterons and heavy ions with nuclei, have been studied since its installation in the year 2001. In particular, the so-called DLS puzzle was solved experimentally by remeasuring C+C at 1 and 2 GeV/u and by careful studies of inclusive pp and pn reactions at 1.25 GeV. With these measurements the so-called reference spectrum was established. Measurements of e+e− production in Ar+KCl showed an enhancement of the dilepton spectrum above the trivial NN background. Theory predicts a strong enhancement of medium radiation with the system size, due to the copious production of fast-decaying baryonic resonances like ∆ and N*. The heaviest system measured so far was Au+Au at a kinetic beam energy of 1.23 GeV/u. The precise determination of the medium radiation depends on a precise knowledge of the underlying hadronic cocktail, composed of the various sources contributing to the measured dilepton spectrum. In general, the medium radiation needs to be separated from contributions of long-lived particles that decay after the freeze-out of the system. For a more model-independent understanding of the dilepton cocktail, the production cross sections of these particles need to be measured independently. In the relevant energy regime the main contributors are π0 and η Dalitz decays. Both mesons have a dominant decay into two real photons and have been reconstructed successfully in this channel. Since HADES has no electromagnetic calorimeter, the mesons cannot be identified in this decay channel directly. In this thesis the capability of HADES to detect e+e− pairs from conversions of real photons is demonstrated. To this end, not only the conversion probability but also the resulting efficiencies are shown. Furthermore, the reconstruction method for neutral mesons is explained and the resulting spectra are interpreted. The measurement of neutral pions is compared to the independently measured charged-pion distribution and extrapolated to full phase space. An integrated approach is used to determine the η yield. Both measurements are compared to world data and to theoretical model calculations. Finally, the measurements are used together with the reconstructed dilepton spectra to determine the amount and the properties of in-medium radiation in the Au+Au system.
The FRANZ accelerator facility is currently being built in the physics experimental hall on the Riedberg campus of Goethe University. FRANZ stands for Frankfurter Neutronenquelle am Stern-Gerlach-Zentrum (Frankfurt neutron source at the Stern-Gerlach Center). The facility offers a wide range of experimental possibilities for studying intense, pulsed proton beams. One research focus at the secondary neutron beams is measurements for nuclear astrophysics. The neutrons are produced by a 2 MeV proton beam via the reaction 7Li(p,n)7Be. The planned experiments require a pulse repetition rate of up to 250 kHz, realized here for the first time worldwide, at pulse currents in the 100 mA range, as well as extreme pulse compression down to one nanosecond with pulse currents then reaching the ampere range. In addition, continuous-wave beam operation in the mA current range is possible. Many individual accelerator components, such as the ion source, the chopper for pulse shaping, the rf-coupled RFQ-IH combination, the rebuncher in the form of a CH structure, and the bunch compressor, are new developments. Average beam powers of up to 24 kW occur in the low-energy beam transport section, since the ion source is always operated in continuous-wave mode, even at high current with high pulse repetition rates. Personnel and equipment protection therefore also plays an essential role in the design of the FRANZ control system.
The layout of FRANZ and its main components are explained in Chapter 2. The many different components, such as the high-voltage section, magnets, rf components and cavities, vacuum components, beam diagnostics and detectors, make it plausible that the control system for such a facility also has to be designed specifically. For comparison, Chapter 4 presents the control concepts of current large accelerator projects, namely the European Spallation Source ESS and the Facility for Antiproton and Ion Research FAIR.
In the present work, the ion source was chosen as a complex accelerator component for developing and testing control and regulation schemes. A flow chart (Fig. 5.15) was developed and implemented for starting up and operating the ion source. In detail, the dependence of the heated-cathode parameters on the operating time was investigated, from which an algorithm for predicting a timely filament replacement could be derived. Furthermore, the readjustment of the cathode heating current was automated in order to stabilize the arc discharge voltage within an interval of ±0.5 V. The ramp-up of the filament current was also automated: the change in vacuum pressure is measured as a function of the filament current increase, evaluated, and the next permissible current increment is derived from it. In this way, the operating state is reached faster and in a more controlled fashion than with manual ramp-up, bringing the goal of unattended ion source operation closer. In a first test of component control and data acquisition, an ion beam was extracted and transported through the first focusing magnet, a solenoid. The excitation current of the solenoid and the beam energy were scanned automatically, the data were stored, and a contour plot of the measured beam current behind the focusing lens was produced from them (Fig. 5). The present work deals only with the "slow" control and regulation processes, while the fast processes in the rf control system are regulated independently. In addition to monitoring the operating state of all components, all data required for servicing and personnel safety are also logged. The system is based on MNDACS (Mesh Networked Data Acquisition and Control System) and is written in Java.
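As an illustration of the two automated procedures described above (arc-voltage stabilization within ±0.5 V and the pressure-guided filament ramp-up), the following sketch shows one possible form of such control loops; every hardware-access function name is a hypothetical placeholder and not part of MNDACS or the actual FRANZ control code, and the set points are illustrative.

```python
# Hedged sketch of the two automated ion-source procedures described above.
# read_arc_voltage, read_vacuum_pressure, set_filament_current, adjust_heater_current
# are hypothetical placeholders for the real hardware interfaces.
import time

V_ARC_SET = 100.0          # target arc voltage [V] (illustrative value)
V_TOL = 0.5                # allowed deviation [V], as stated above
P_MAX_RISE = 5e-7          # maximum tolerated pressure rise per step [mbar] (assumed)

def stabilize_arc_voltage(read_arc_voltage, adjust_heater_current,
                          step=0.05, period=1.0, cycles=600):
    """Keep the arc discharge voltage within +/- V_TOL of the set point by small
    adjustments of the cathode heating current (sign convention assumed here)."""
    for _ in range(cycles):
        dv = read_arc_voltage() - V_ARC_SET
        if abs(dv) > V_TOL:
            adjust_heater_current(+step if dv > 0 else -step)
        time.sleep(period)

def ramp_filament(read_vacuum_pressure, set_filament_current,
                  i_start=5.0, i_target=40.0, base_step=1.0, settle=10.0):
    """Ramp the filament current up; the size of the next step is derived from the
    vacuum-pressure change caused by the previous one."""
    current = i_start
    while current < i_target:
        p_before = read_vacuum_pressure()
        set_filament_current(current)
        time.sleep(settle)
        rise = read_vacuum_pressure() - p_before
        # shrink the next step if outgassing drove the pressure up too much
        step = base_step if rise < P_MAX_RISE else base_step * P_MAX_RISE / rise
        current = min(current + step, i_target)
    set_filament_current(i_target)
```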
MNDACS consists of a kernel that runs the component driver software as well as the network server and the graphical network interface (GUI). It also comprises the Driver Abstraction Layer (DAL), which provides access to other computers or to local drivers. CORBA serves as the middleware for network communication; it handles communication with external software and defines how communication is rerouted in the case of line interruptions or a local computer crash. FRANZ has two control levels: the high-level control and the data processing run over Ethernet, while the interlock and safety system runs over the low-level control. The network connections use 1 Gb Ethernet links, which still allow fast data exchange even in the case of local network disturbances. To keep the computer system running during power outages, an uninterruptible power supply (UPS) was procured as part of this work and successfully tested at the high-voltage terminal.