The objective of this thesis is to develop new methodologies for the formal verification of nonlinear analog circuits. To this end, new approaches to discrete modeling of analog circuits, to the specification of analog circuit properties, and to formal verification algorithms are introduced. Formal approaches to the verification of analog circuits have not yet been introduced into industrial design flows and are still subject to research. Formal verification proves specification conformance for all possible input conditions and all possible internal states of a circuit. Automatically proving that a model of the circuit satisfies a declarative, machine-readable property specification is referred to as model checking; equivalence checking proves the equivalence of two circuit implementations. Starting from the state of the art in modeling analog circuits for simulation-based verification, this thesis motivates discrete modeling of analog circuits for state space-based formal verification methodologies. In order to improve the discrete modeling of analog circuits, a new trajectory-directed partitioning algorithm was developed in the scope of this thesis. This new approach determines the partitioning of the state space parallel or orthogonal to the trajectories of the state space dynamics. It thereby achieves a high accuracy of the successor relation while requiring fewer states than the state-of-the-art hyperbox approach for a discrete model of equal accuracy. Mapping the partitioning to a discrete analog transition structure (DATS) enables the application of formal verification algorithms. By analyzing digital specification concepts and the existing approaches to analog property specification, the requirements for a new specification language for analog properties are discussed in this thesis: on the one hand, it shall meet the requirements for the formal specification of verification approaches applied to DATS models; on the other hand, its syntax shall be oriented towards natural-language phrases. Synthesizing these requirements, the analog specification language (ASL) was developed in the scope of this thesis. The model checking algorithms that were developed in combination with ASL for application to DATS models generated with the new trajectory-directed approach offer a significant enhancement over the state of the art. In order to prepare the transition from signal-based to state space-based verification methodologies, an approach was developed in the scope of this thesis to transfer transient simulation results from non-formal test bench simulation flows into a partial state space representation in the form of a DATS. As demonstrated by examples, the same ASL specification that was developed for formal model checking on complete discrete models can be evaluated without modification on transient simulation waveforms. An approach to counterexample generation for the formal ASL model checking methodology generates transition sequences from a defined starting state to a specification-violating state for inspection in transient simulation environments. Based on this counterexample generation, a new formal verification methodology using complete state space-covering input stimuli was developed. By conducting a transient simulation with these complete state space-covering input stimuli, the circuit passes through every state and transition that was visited during stimulus generation.
An alternative formal verification methodology is obtained by transferring the transient simulation responses back to a DATS model and applying the ASL verification algorithms in combination with an ASL property specification. Moreover, the complete state space-covering input stimuli can be applied to develop a formal equivalence checking methodology: the equivalence of two implementations can be proven for every inner state of both systems by comparing their transient simulation responses to the complete-coverage stimuli. In order to visually inspect the results of the newly introduced verification methodologies, an approach to dynamic state space visualization using multi-parallel particle simulation was developed. Because the particles are randomly distributed over the complete state space and move according to the state space dynamics, this provides another perspective on the system's behavior that covers the state space and hence offers formal results. The prototypical implementations of the formal verification methodologies developed in the scope of this thesis have been applied to several example circuits. The results obtained for the new approaches to discrete modeling, specification and verification algorithms all demonstrate that the new verification methodologies can be applied to complex circuit blocks and their properties.
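To make the counterexample idea above concrete, the following minimal sketch treats a DATS as a plain directed graph over discrete state-space regions and searches for a transition sequence from a start state to a property-violating state. It is an illustration only; the state names, the predicate, and the breadth-first search are assumptions and not the ASL model checking algorithms developed in the thesis.

from collections import deque

# Hypothetical, minimal sketch (not the thesis's implementation): a DATS is
# treated as a directed graph over discrete state-space regions, and a
# counterexample is a transition sequence from a start state to any state
# that violates a given predicate.
def find_counterexample(successors, start, violates):
    """successors: dict mapping state -> iterable of successor states;
    violates: predicate marking specification-violating states."""
    queue = deque([start])
    parent = {start: None}
    while queue:
        state = queue.popleft()
        if violates(state):
            # Reconstruct the transition sequence back to the start state.
            path = []
            while state is not None:
                path.append(state)
                state = parent[state]
            return list(reversed(path))
        for nxt in successors.get(state, ()):
            if nxt not in parent:
                parent[nxt] = state
                queue.append(nxt)
    return None  # no reachable violating state: the property holds from 'start'

# Toy usage: four abstract states, state "D" violates the property.
dats = {"A": ["B"], "B": ["C", "D"], "C": ["A"], "D": []}
print(find_counterexample(dats, "A", lambda s: s == "D"))  # ['A', 'B', 'D']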
The need for versatile signal generators has always been evident in modern RF and communication systems. The most conventional technique, the voltage-controlled oscillator (VCO), suffers from inferior phase noise and narrow bandwidth, even though its operating frequency can reach the sub-THz regime. Its phase noise is influenced by various parameters associated with the oscillator circuit, e.g. transistor size and noise, bias current, and noise leaking from the bias supply. The bandwidth is limited because the relation between the input voltage and the output frequency of the VCO is not strictly linear over the tuning range. The phase noise and the SFDR of the VCO output can be improved by phase locking. The phase-locked loop (PLL) uses a feedback system to lock the VCO to a reference frequency. However, the settling time of the PLL is high due to the feedback control loop, which increases the frequency switching time between PLL outputs. IG-oscillators are suitable for multi-GHz-range and wide-bandwidth applications. Signal generation can also be achieved with free-electron radiation, optical lasers, and Gunn diodes, which can operate even in the THz domain. All these signal generators suffer from slow frequency switching and a lack of digital controllability and advanced modulation capability, even though they operate in the THz regime. Alternatively, the arbitrary waveform generator (AWG) can produce a wide range of frequencies with low phase noise, including digital controllability. One of the vital components of the AWG is the direct digital synthesiser (DDS). Generally, it is composed of a phase accumulator, a digital-to-analogue converter, sine mapping circuits and a low-pass filter. It needs a reference clock that defines the sampling of the DDS output. Its output frequency can be varied by applying an appropriate digital input code. However, high-speed DDS has several limitations, such as a low number of output frequency points, the lack of a phase control unit, and high power consumption. This work addresses these limitations.
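As an illustration of the phase-accumulator principle mentioned above, the following minimal sketch generates DDS output samples from a tuning word, an accumulator width and a sine mapping. All values, names and bit widths are illustrative assumptions, not the high-speed DDS architecture developed in this work.

import math

# Minimal sketch of the direct digital synthesis principle described above
# (phase accumulator + sine mapping + DAC quantization); all values are
# illustrative assumptions, not the architecture developed in this work.
def dds_samples(tuning_word, n_samples, acc_bits=32, dac_bits=10):
    """One sample per reference-clock cycle; f_out = tuning_word / 2**acc_bits * f_clk."""
    acc_mod = 1 << acc_bits
    levels = 1 << dac_bits
    phase = 0
    samples = []
    for _ in range(n_samples):
        phase = (phase + tuning_word) % acc_mod               # phase accumulator
        angle = 2 * math.pi * phase / acc_mod                 # phase-to-amplitude mapping
        samples.append(round((math.sin(angle) * 0.5 + 0.5) * (levels - 1)))  # DAC code
    return samples

# With a 1 GHz reference clock and tuning word 2**22, the output frequency is
# 2**22 / 2**32 * 1 GHz, i.e. roughly 0.98 MHz.
print(dds_samples(tuning_word=1 << 22, n_samples=8))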
This thesis investigates the distribution of temporally dependent tasks in a distributed system from the perspective of Organic Computing. It contributes to the theory of scheduling and to the self-organizing distribution of such dependent tasks under real-time conditions. The thesis is divided into two parts: In the first part, tasks are modeled as so-called paths, which consist of a fixed sequence of jobs. A path must be executed without interruption on a resource, and the order of its jobs must be preserved. Naturally, there can also be temporal dependencies between jobs of different paths. This raises the question of whether a given system S of paths with its dependencies is executable at all: this is the case exactly when the relation <A resulting from the dependencies between the jobs is irreflexive. Furthermore, for an executable system of paths it must be clarified what a concrete execution plan looks like. For this purpose, a further relation < on the paths is introduced. If < is irreflexive on them, a total order on the paths can be constructed, which yields an execution plan. Otherwise, cycles of paths exist with respect to the relation <. The thesis further investigates how to isolate these cycles and how to construct a total order, and thus an execution plan, on a transformed path system. The size of the path cycles with respect to < is the most important parameter for the number of resources required to execute a system. Therefore, the thesis also examines in detail whether and how cycles can be arranged to reduce the number of resources and thus optimize the resource requirements. Two ideas are pursued: First, a library can be built that contains generic cycles together with their optimizations. The second idea applies when no suitable entries can be found in the library: in this case, a random or heuristic-based arrangement is used with the goal of optimizing the resource requirements. Based on the theoretical considerations, algorithms are developed and time bounds for their execution are given. Since the execution time of a path system is also important, two recurrences are given and analyzed. These estimate the total execution time under the condition that no faults can occur at the resources. The distribution of paths to resources is investigated in the second part of the thesis. First, an artificial hormone system (AHS) is presented that performs the distribution while taking the principles of Organic Computing into account. Two alternatives are examined: In the first approach, the single-stage AHS, the paths of a system are distributed directly by the AHS to the resources for execution. In addition, mechanisms for limiting how frequently paths are taken over by the resources and a termination mechanism are developed. In the second approach, the two-stage AHS, the AHS first reserves resources exclusively for classes of paths. The paths of the system are then assigned exactly to the reserved resources, enabling execution without interaction between paths of different classes. For this approach as well, methods for limiting the takeover frequencies and for termination are provided.
Time bounds can be given for the distribution and termination of paths by the single-stage or two-stage AHS, so that even hard real-time deadlines can be met. Finally, both approaches are evaluated with various benchmarks and their performance is demonstrated. It turns out that the first approach is easier for a user to handle, since the required parameters can be computed very easily. The second approach is very well suited when only a small number of resources is available and the paths of different classes are to run as independently of each other as possible. In conclusion: The insights gained in this thesis now make it possible to examine the executability of temporally dependent tasks with real-time-capable algorithms and to optimize the resource requirements for their execution. Furthermore, two different variants of an artificial hormone system for allocating such tasks in a distributed system are provided, each of which unfolds its strengths under different boundary conditions and thus covers a broad field of applications. Bounds can be given for the computational effort of both approaches, which qualifies them for use in real-time systems.
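As an illustration of the executability question discussed above, the following minimal sketch checks whether a dependency relation between paths admits a total order and, if so, derives an execution plan via a topological sort. The graph representation and Kahn-style algorithm are illustrative assumptions, not the algorithms or the cycle handling developed in the thesis.

from collections import deque

# Illustrative sketch only (not the thesis's algorithms): dependencies between
# paths are given as a directed graph; an execution plan exists exactly when
# the graph is acyclic, in which case a topological order provides the plan.
def execution_plan(paths, depends_on):
    """paths: iterable of path ids; depends_on: dict path -> set of paths
    that must run before it. Returns a total order, or None if cyclic."""
    indegree = {p: 0 for p in paths}
    followers = {p: [] for p in paths}
    for p, preds in depends_on.items():
        for q in preds:
            indegree[p] += 1
            followers[q].append(p)
    ready = deque(p for p in paths if indegree[p] == 0)
    order = []
    while ready:
        p = ready.popleft()
        order.append(p)
        for f in followers[p]:
            indegree[f] -= 1
            if indegree[f] == 0:
                ready.append(f)
    return order if len(order) == len(indegree) else None  # None: a cycle was found

# Toy usage: P3 depends on P1 and P2; there is no cycle, so a plan exists.
print(execution_plan(["P1", "P2", "P3"], {"P3": {"P1", "P2"}}))  # ['P1', 'P2', 'P3']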
Detectors of modern high-energy physics experiments generate huge data rates during operation. The efficient read-out of this data from the front-end electronics is a sophisticated task, the main challenges, however, may vary from experiment to experiment. The Compressed Baryonic Matter (CBM) experiment that is currently under construction at the Facility for Antiproton and Ion Research (FAIR) in Darmstadt/Germany foresees a novel approach for data acquisition.
Unlike previous comparable experiments that organize data read-out based on global, hierarchical trigger decisions, CBM is based on free-running and self-triggered front-end electronics. Data is pushed to the next stage of the read-out chain rather than pulled from the buffers of the previous stage. This new paradigm requires a completely new development of read-out electronics.
As one part of this thesis, a firmware for a read-out controller to interface such a free-running and self-triggered front-end ASIC, the GET4 chip, was implemented. The firmware in question was developed to run on a Field Programmable Gate Array (FPGA). An FPGA is an integrated circuit whose behavior can be reconfigured "in the field", which offers a lot of flexibility: bugs can be fixed and completely new features can be added, even after the hardware has been installed. Due to these general advantages, the usage of FPGAs is desired for the final experiment. However, there is also a drawback to the usage of FPGAs. The only affordable FPGAs today are based on either SRAM or Flash technology, and neither can easily be operated in a radiation environment.
SRAM-based devices suffer severely from Single Event Upsets (SEUs) and Flash-based FPGAs deteriorate too fast from Total Ionizing Dose (TID) effects.
Several radiation mitigation techniques exist for SRAM-based FPGAs, but careful evaluation for each use case is required. For CBM it is not clear whether the higher resource consumption of added redundancy, which more or less directly translates into additional cost, outweighs the advantages of using FPGAs. In addition, it is not even clear whether radiation mitigation techniques (e.g. scrubbing) that have already been successfully put into operation in space applications also work as efficiently at the much higher particle rates expected at CBM.
In this thesis, existing radiation mitigation techniques have been analyzed and eligible techniques have been implemented for the above-mentioned read-out controller. To minimize additional costs, redundancy was only implemented for selected parts of the design.
Finally, the radiation mitigated read-out controller was tested by mounting the device directly into a particle beam at Forschungszentrum Jülich. The tests show that the radiation mitigation effect of the implemented techniques remains sound, even at a very high particle flux and with only part of the design protected by costly redundancy.
The promising results of the in-beam tests suggest using FPGAs in the read-out chain of the CBM-ToF detector.
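The abstract does not spell out which redundancy scheme was implemented; as a generic illustration of the kind of redundancy commonly used against SEUs, the following sketch shows a bitwise triple modular redundancy (TMR) majority voter. It is a textbook example, not code from the read-out controller firmware described above.

# Generic illustration of triple modular redundancy (TMR), one common way to
# add redundancy against single event upsets; this is not taken from the
# firmware described above.
def tmr_vote(a: int, b: int, c: int) -> int:
    """Bitwise majority of three redundant copies of the same register value."""
    return (a & b) | (a & c) | (b & c)

# A single upset flips one bit in copy 'b'; the voter still returns the
# original value, at the cost of storing and comparing three copies.
original = 0b1011_0010
upset    = original ^ 0b0000_1000
print(bin(tmr_vote(original, upset, original)))  # 0b10110010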
This thesis presents an organic task processing system that implements the reliable management and processing of tasks on multi-core-based SoC architectures. Due to the increasing integration density, planar semiconductor manufacturing exhibits a growing number of side effects that lead to errors and component failures during system operation, increasingly impairing the reliability of SoCs. Already at feature sizes below 100 nm, a drastic increase in electromigration and radiation sensitivity can be observed. At the same time, complexity (application requirements) continues to increase, with the current trend aiming at an ever stronger interconnection of devices (ubiquitous systems). To cope with these challenges autonomously, this thesis presents a biologically inspired system concept. It draws on properties and techniques of the human endocrine system and implements a fully decentralized operating principle with self-X properties from the field of Organic Computing. This organic operating principle is realized by two separate control loops that jointly take over the decentralized management and processing of tasks. The first control loop is represented by the artificial hormone system (AHS) and distributes all tasks to the available cores. The distribution is carried out through the participation of all cores and takes their local suitability and current state into account. Subsequently, synchronization with the second control loop takes place, which is represented by the hormone-controlled task processing (HTV) and performs a dynamic task transfer according to the current distribution. The task states available in the network are also taken into account, resulting in a complete processing path from the initial task assignment, via the transfer of the task components and the creation of the local task instance, to the start of the corresponding task process on the respective core. The system implementation consists of modular hardware and software components. As a result, the system can be operated and used entirely in hardware, entirely in software, or in a hybrid form. Using an FPGA-based prototype, the formally proven time bounds were confirmed by measurements in a real system environment. The measurement results show excellent time bounds with respect to the self-X properties. Furthermore, the quantitative comparison with other systems shows that the decentralized control approach chosen here is clearly superior in terms of fault tolerance, area, and computational effort.
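The abstract above describes the hormone-based distribution only at a high level; the following strongly simplified sketch illustrates the general idea of a decentralized, hormone-style allocation decision in which each core weighs its local suitability against its current load. The scoring formula and all names are hypothetical assumptions, not the actual hormone equations of the AHS/HTV control loops.

# Strongly simplified, hypothetical illustration of a hormone-style allocation
# round; the scoring formula below is an assumption and not the thesis's
# actual hormone equations.
def allocation_round(tasks, cores):
    """cores: dict core_id -> {'suitability': {task: float}, 'load': float}.
    Each task goes to the core with the highest local hormone value."""
    assignment = {}
    for task in tasks:
        def hormone(core_id):
            core = cores[core_id]
            # local suitability acts like an accelerator, current load like a suppressor
            return core['suitability'].get(task, 0.0) - core['load']
        assignment[task] = max(cores, key=hormone)
        cores[assignment[task]]['load'] += 1.0   # taking a task raises the suppressor
    return assignment

cores = {
    'core0': {'suitability': {'t1': 2.0, 't2': 1.0}, 'load': 0.0},
    'core1': {'suitability': {'t1': 1.5, 't2': 1.8}, 'load': 0.0},
}
print(allocation_round(['t1', 't2'], cores))  # {'t1': 'core0', 't2': 'core1'}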
The behaviour of electronic circuits is influenced by ageing effects. Modelling the behaviour of circuits is a standard approach for the design of faster, smaller, more reliable and more robust systems. In this thesis, we propose a formalization of robustness that is derived from a failure model, which is based purely on the behavioural specification of a system. For a given specification, simulation can reveal whether a system does not comply with the specification, and thus provide a failure model. Ageing usually works against the specified properties, and ageing models can be incorporated to quantify the impact on specification violations, failures and robustness. We study ageing effects in the context of analogue circuits. Here, models must factor in infinitely many circuit states. Ageing effects have a cause and an impact that both require models. On both ends, the circuit state is highly relevant and must be factored in. For example, static empirical models for ageing effects are not valid in many cases, because the assumed operating states do not agree with the circuit simulation results. This thesis identifies essential properties of ageing effects, and we argue that they need to be taken into account for modelling the interrelation of cause and impact. These properties include frequency dependence, monotonicity, memory and relaxation mechanisms as well as control by arbitrarily shaped stress levels. Starting from decay processes, we define a class of ageing models that fits these requirements well while remaining arithmetically accessible by means of a simple structure.
Modelling ageing effects in semiconductor circuits becomes more relevant with higher integration and smaller structure sizes. With respect to miniaturization, digital systems are ahead of analogue systems, and similarly ageing models predominantly focus on digital applications. In the digital domain, the signal levels are either on or off or switching in between. Given an ageing model as a physical effect bound to signal levels, ageing models for components and whole systems can be inferred by means of average operation modes and cycle counts. Functional and faithful ageing effect models for analogue components often require a more fine-grained characterization of physical processes; here, signal levels can take arbitrary values to begin with. Such fine-grained, physically inspired ageing models do not scale to larger applications and are hard to simulate in reasonable time. To close the gap between physical processes and system-level ageing simulation, we propose a data-based modelling strategy, according to which measurement data is turned into ageing models for analogue applications. Ageing data is a set of pairs of stress patterns and the corresponding parameter deviations. Assuming additional properties, such as monotonicity or frequency independence, learning algorithms can find a complete model that is consistent with the data set. These ageing effect models decompose into a controlling stress level, an ageing process, and a parameter that depends on the state of this process. Using this representation, we are able to embed a wide range of ageing effects into behavioural models for circuit components. Based on the developed modelling techniques, we introduce a novel model for the BTI effect, an ageing effect that permits relaxation. Building on this, a transistor-level ageing model for BTI that targets analogue circuits is proposed. Similarly, we demonstrate how ageing data from analogue transistor-level circuit models lifts to purely behavioural block models. With this, we are the first to present a data-based hierarchical ageing modelling scheme. An ageing simulator for circuit or system-level models computes long-term transients, solutions of a differential equation. Long-term transients are often close to quasi-periodic, in some sense repetitive. If the evaluation of ageing models under quasi-periodic conditions can be done efficiently, long-term simulation becomes practical. We describe an adaptive two-time simulation algorithm that essentially skips periods during simulation, advancing faster on a second time axis. The bottleneck of two-time simulation is the extrapolation through the skipped frames. This involves both the evaluation of the ageing models and the consistency of the boundary conditions. We propose a simulator that computes long-term transients by exploiting the structure of the proposed ageing models. These models permit extrapolation of the ageing state by means of a locally equivalent stress, a sort of average stress level. This level can be computed efficiently and also gives rise to a dynamic step control mechanism. Ageing simulation has a wide range of applications. This thesis vastly improves the applicability of ageing simulation for analogue circuits in terms of modelling and efficiency. An ageing effect model that is part of a circuit component model accounts for parametric drift that is directly related to the operation mode. For example, asymmetric load on a comparator or power stage may lead to offset drift, which is not an empirical effect.
Monitor circuits can report such effects during operation, when they become significant. Simulating the behaviour of these monitors is important during their development. Ageing effects can be compensated using redundant parts, and annealing can restore broken components to a functional state. We show that such mechanisms can be simulated in place using our models and algorithms. The aim of automated circuit synthesis is to create a circuit that implements a specification for a certain use case. Ageing simulation can identify candidates that are more reliable. Efficient ageing simulation allows various operation modes to be factored in and helps refine the selection. Using long-term ageing simulation, we have analysed the fitness of a set of synthesized operational amplifiers with similar properties for various use cases. This procedure enables the automatic selection of the most ageing-resilient implementation.
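As an illustration of the decay-like model class described above (a controlling stress level, an ageing process, and a parameter depending on the process state), the following toy sketch uses assumed first-order relaxation dynamics. It is neither the BTI model nor the simulator proposed in the thesis; the time constant, the stress mapping and the parameter shift are arbitrary assumptions.

import math

# Illustrative toy model only (assumed first-order dynamics, not the thesis's
# BTI model): a stress-controlled decay process whose state x relaxes towards
# a stress-dependent equilibrium, and a circuit parameter that drifts with x.
def age_step(x, stress, dt, tau=1.0):
    """Advance the ageing state x over dt for a constant stress level in [0, 1]."""
    x_eq = stress                      # equilibrium set by the applied stress
    return x_eq + (x - x_eq) * math.exp(-dt / tau)

def threshold_shift(x, dv_max=0.05):
    """Parameter deviation (e.g. a threshold-voltage shift) as a function of x."""
    return dv_max * x

# Stress phase followed by a relaxation (recovery) phase.
x = 0.0
for _ in range(100):
    x = age_step(x, stress=1.0, dt=0.05)   # device under stress: parameter drifts
print(round(threshold_shift(x), 4))
for _ in range(100):
    x = age_step(x, stress=0.0, dt=0.05)   # stress removed: partial recovery
print(round(threshold_shift(x), 4))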
Programmable hardware in the form of FPGAs has found its place in various high energy physics experiments over the past few decades. These devices provide highly parallel and fully configurable data transport, data formatting, and data processing capabilities with custom interfaces, even in rigid or constrained environments. Additionally, FPGA functionalities and the number of their logic resources have grown exponentially in the last few years, making FPGAs more and more suitable for complex data processing tasks. ALICE is one of the four main experiments at the LHC and is specialized in the study of heavy-ion collisions. The readout chain of the ALICE detectors makes use of FPGAs in various places. The Read-Out Receiver Cards (RORCs) are one example of FPGA-based readout hardware, forming the interface between the custom detector electronics and the commercial server nodes in the data processing clusters of the Data Acquisition (DAQ) system as well as the High Level Trigger (HLT). These boards are implemented as server plug-in cards with serial optical links towards the detectors. Experimental data is received via more than 500 optical links, already partly pre-processed in the FPGAs, and pushed towards the host machines. Computer clusters consisting of a few hundred nodes collect, aggregate, compress, reconstruct, and prepare the experimental data for permanent storage and later analysis. With the end of the first LHC run period in 2012 and the start of Run 2 in 2015, the DAQ and HLT systems were renewed and several detector components were upgraded for higher data rates and event rates. Increased detector link rates and obsolete host interfaces rendered it impossible to reuse the previous RORCs in Run 2.
This thesis describes the development, integration, and maintenance of the next generation of RORCs for ALICE in Run 2. A custom hardware platform, initially developed as a joint effort between the ALICE DAQ and HLT groups in the course of this work, found its place in the Run 2 readout systems of the ALICE and ATLAS experiments. The hardware fulfills all experiment requirements, matches its target performance, and has been running stably in the production systems since the start of Run 2. Firmware and software developments for the hardware evaluation, the design of the board, the mass production hardware tests, as well as the operation of the final board in the HLT were carried out as part of this work. 74 boards were integrated into the HLT hardware and software infrastructure, with various firmware and software developments, to provide the main experimental data input and output interface of the HLT for Run 2. The hardware cluster finder, an FPGA-based data pre-processing core from the previous generation of RORCs, was ported to the new hardware. It has been improved and extended to meet the experimental requirements throughout Run 2. The throughput of this firmware component was doubled and the algorithm extended, providing improved noise rejection and an increased overall mean data compression ratio compared to its previous implementation. The hardware cluster finder forms a crucial component in the HLT data reconstruction and compression scheme, with the processing performance of one board being equivalent to around ten server nodes for comparable processing steps in software.
The work on the firmware development, especially on the hardware cluster finder, once more demonstrated that developing and maintaining data processing algorithms with common low-level hardware description methods is tedious and time-consuming. Therefore, a high-level synthesis (HLS) hardware description method applying dataflow computing at an algorithmic level to FPGAs was evaluated in this context. The hardware cluster finder served as an example of a typical data processing algorithm in a high energy physics readout application. The existing and highly optimized low-level implementation provided a reference for comparisons in terms of throughput and resource usage. The cluster finder algorithm could be implemented in the dataflow description with comparably little effort, providing fast development cycles, compact code and, at the same time, simplified extension and maintenance options. The performance results in terms of throughput and resource usage are comparable to the manual implementation. The dataflow environment proved to be highly valuable for design space explorations. An integration of the dataflow description into the HLT firmware and software infrastructure was demonstrated as a proof of concept. A high-level hardware description could thus ease the design space exploration, the initial development, the maintenance, and the extension of hardware algorithms for high energy physics readout applications.
In this dissertation, the formal abstraction and verification of analog circuits is examined. An approach is introduced that automatically abstracts a transistor-level circuit with full Spice accuracy into a hybrid automaton (HA) in various output languages. The generated behavioral model exhibits a significant simulation speed-up compared to the original netlist while maintaining an acceptable accuracy, and can therefore be used in various verification and validation routines. On top of that, the generated models can be formally verified against their Spice netlists, making the obtained models correct by construction.
The generated abstract models can be extended to enclose modeling as well as technology-dependent parameter variations with little over-approximation. As these models enclose the various behaviors of the sampled netlists, they are of significant importance, since they can replace several simulations with just a single reachability analysis or symbolic simulation. Moreover, these models can also be used in different verification routines, as demonstrated in this dissertation.
As the obtained models are described by HAs with linear behaviors in the locations, the abstract models can also be compositionally linked, thereby allowing the abstraction of complex analog circuits.
Depending on the specified modeling settings, including for example the number of locations of the HA and the description of the system behavior, the accuracy, speed-up, and various additional properties of the HA can be influenced. This is examined in detail in this dissertation. The underlying abstraction process is first covered in detail. Several extensions are then addressed, including the modeling of HAs with parameter variations. The obtained models are then verified using various verification methodologies. The accuracy and speed-up of the abstraction methodology are finally evaluated on several transistor-level circuits, ranging from simple operational amplifiers up to complex circuits.
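For readers unfamiliar with the modeling formalism named above, the following minimal sketch shows a hypothetical hybrid automaton with linear dynamics per location and guard-triggered location changes, simulated with a simple forward-Euler step. It only illustrates the general structure of such automata; it is neither the abstraction output nor the reachability analysis used in the dissertation.

from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

# Minimal, hypothetical sketch of a hybrid automaton with linear dynamics per
# location (dx/dt = a*x + b); this is an illustration, not the abstraction
# output of the flow described above.
@dataclass
class Location:
    a: float
    b: float
    transitions: List[Tuple[Callable[[float], bool], str]]  # (guard, target location)

def simulate(locations: Dict[str, Location], loc: str, x: float, dt: float, steps: int):
    trace = [(loc, x)]
    for _ in range(steps):
        dyn = locations[loc]
        x += (dyn.a * x + dyn.b) * dt                 # forward-Euler step of the linear flow
        for guard, target in dyn.transitions:
            if guard(x):                              # discrete jump when a guard is satisfied
                loc = target
                break
        trace.append((loc, x))
    return trace

# Toy two-location automaton: charge towards 1.0, then discharge below 0.2.
ha = {
    "charge":    Location(a=-1.0, b=1.0, transitions=[(lambda x: x > 0.8, "discharge")]),
    "discharge": Location(a=-1.0, b=0.0, transitions=[(lambda x: x < 0.2, "charge")]),
}
print(simulate(ha, "charge", 0.0, dt=0.1, steps=5)[-1])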
Hierarchical self-organizing systems for task-allocation in large scaled distributed architectures
(2019)
This thesis deals with the subject of autonomous, decentralized task allocation in a large-scale multi-core network. The self-organization of such interconnected systems is becoming more and more important for upcoming developments, and it is to be expected that the complexity of these systems will become hardly manageable for human users. Self-organization is part of a research field of the Organic Computing initiative, which aims to find solutions for technical systems by imitating natural systems and their processes. Within this initiative, a system for task allocation in a small-scale multi-core network has already been developed, researched and published. The system is called the Artificial Hormone System (AHS), since it is inspired by the endocrine system of mammals. However, the AHS produces a high communication load when the multi-core network is of larger scale.
The contribution of this thesis consists of two new approaches, both based on the AHS, in order to cope with large-scale architectures. The main idea of both approaches is to introduce a hierarchy into the AHS in order to reduce the produced communication load. The first and more thoroughly researched approach is called the Hierarchical Artificial Hormone System (HAHS), which orders the processing elements into clusters and builds an additional communication layer between them. The second approach is the Recursive Artificial Hormone System (RAHS), which also clusters the system’s processing elements and orders the clusters into a topological tree structure for communication.
Both approaches are explained in this thesis in terms of their principal structure as well as some optional methods. Furthermore, this thesis presents estimations of the worst-case timing behavior and the worst-case communication load of the HAHS and RAHS. Finally, the evaluation results of both approaches, especially in comparison with the AHS, are presented and discussed.
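To illustrate why clustering can reduce the communication load, the following back-of-envelope sketch compares message counts under a deliberately simplified assumption: every processing element exchanges one hormone message per cycle with every other element in its communication scope. The formulas and numbers are illustrative assumptions only and do not reproduce the worst-case bounds derived in the thesis.

# Deliberately simplified back-of-envelope illustration (not the worst-case
# bounds derived in the thesis): assume each processing element exchanges one
# hormone message per cycle with every other element in its communication scope.
def flat_messages(n):
    return n * (n - 1)                                     # every element talks to every other element

def clustered_messages(n, cluster_size):
    clusters = n // cluster_size
    intra = clusters * cluster_size * (cluster_size - 1)   # traffic inside each cluster
    inter = clusters * (clusters - 1)                      # traffic between cluster heads
    return intra + inter

n = 1024
print(flat_messages(n))                        # 1047552 messages in the flat case
print(clustered_messages(n, cluster_size=32))  # 32736 messages with 32 clusters of 32 elements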