Hierarchical self-organizing systems for task-allocation in large scaled distributed architectures
(2019)
This thesis deals with autonomous, decentralized task allocation in a large-scale multi-core network. The self-organization of such interconnected systems is becoming increasingly important for upcoming developments, since the complexity of those systems can be expected to become hardly manageable for human users. Self-organization is a research topic of the Organic Computing initiative, which aims to find solutions for technical systems by imitating natural systems and their processes. Within this initiative, a system for task allocation in a small-scale multi-core network has already been developed, researched and published. The system is called the Artificial Hormone System (AHS), since it is inspired by the endocrine system of mammals. The AHS produces a high communication load when the multi-core network grows to a larger scale.
The contribution of this thesis consists of two new approaches, both based on the AHS, designed to cope with large-scale architectures. The central idea of both approaches is to introduce a hierarchy into the AHS in order to reduce the produced communication load. The first and more thoroughly researched approach is the Hierarchical Artificial Hormone System (HAHS), which arranges the processing elements into clusters and builds an additional communication layer between them. The second approach is the Recursive Artificial Hormone System (RAHS), which also clusters the system's processing elements and arranges the clusters into a topological tree structure for communication.
Both approaches are explained in this thesis in terms of their principal structure as well as several optional methods. Furthermore, this thesis presents estimations of the worst-case timing behavior and the worst-case communication load of the HAHS and RAHS. Finally, the evaluation results of both approaches, especially in comparison to the AHS, are shown and discussed.
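As a rough illustration of the hormone principle sketched above, the following Python fragment mimics how processing elements could compete for tasks via eager values and suppressors. It is a deliberately simplified, assumption-laden sketch, not the published AHS equations or the HAHS/RAHS protocols.

```python
# Highly simplified sketch of hormone-based task allocation: every processing
# element (PE) advertises a suitability value ("eager value") per task, and
# the PE with the highest modified value takes the task and raises its own
# suppressors. Hormone names and update details are illustrative assumptions.
def allocate_tasks(eager_values, suppressors):
    """eager_values[pe][task]: static suitability of a PE for a task.
    suppressors[pe][task]: hormone reducing a PE's willingness (e.g. load)."""
    allocation = {}
    tasks = {t for pe in eager_values for t in eager_values[pe]}
    for task in sorted(tasks):
        # Each PE broadcasts its modified eager value for this task.
        modified = {pe: eager_values[pe].get(task, 0.0)
                        - suppressors.get(pe, {}).get(task, 0.0)
                    for pe in eager_values}
        winner = max(modified, key=modified.get)
        allocation[task] = winner
        # Taking a task raises the winner's suppressors for further tasks,
        # spreading the load across the network.
        for other_task in tasks:
            suppressors.setdefault(winner, {})[other_task] = \
                suppressors.get(winner, {}).get(other_task, 0.0) + 1.0
    return allocation

pes = {"PE0": {"t1": 3.0, "t2": 2.0}, "PE1": {"t1": 2.5, "t2": 2.5}}
print(allocate_tasks(pes, suppressors={}))   # {'t1': 'PE0', 't2': 'PE1'}
```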
A Large Ion Collider Experiment (ALICE) is one of the four large experiments at the Large Hadron Collider (LHC) at the European Organization for Nuclear Research (CERN). ALICE focuses on the physics of the strong interaction and in particular on the Quark-Gluon Plasma. This is a state of matter in which quarks are de-confined; it is believed to have existed in the earliest moments of the evolution of the universe. The ALICE detector studies the products of collisions between heavy nuclei, between protons, and between protons and heavy nuclei. The sub-detector closest to the interaction point is the Inner Tracking System (ITS), which is used to measure the momentum and trajectory of the particles generated by the collisions and allows the reconstruction of primary and secondary interaction vertices. The ITS needs to have an accurate spatial resolution together with a low material budget, in order to limit the effect of multiple scattering on low-energy particles and reconstruct their trajectories precisely. During the Long Shutdown 2 (2019-2020) of the LHC, the current ITS will be replaced by a completely redesigned sub-detector, which will improve the readout rate and the particle tracking performance, especially at low momentum.
The ALice PIxel DEtector (ALPIDE) chip was designed to meet the requirements of the upgraded ITS in terms of resolution, material budget, radiation hardness, and readout rate. The ALPIDE chip is a Monolithic Active Pixel Sensor (MAPS) realised in Complementary Metal-Oxide-Semiconductor (CMOS) technology. The sensing element, the analogue front-end, and the digital readout are integrated into the same silicon die. The readout architecture of the new ITS foresees that data is transmitted via a high-speed serial link directly from the ALPIDE to the off-detector electronics. The data is transmitted off-chip by a so-called Data Transmission Unit (DTU), which needs to be tolerant to radiation-induced Single-Event Effects in order to guarantee reliable operation. The ALPIDE chip will operate in a radiation field with a High-Energy Hadron peak flux of 7.7·10^5 cm^-2 s^-1.
The data are sent by the ALPIDE over copper cables to the readout system, which aggregates them and re-transmits them via optical fibres to the counting room. The position where the readout electronics will be placed is constrained by the maximum transmission distance reasonably achievable by the ALPIDE Data Transmission Unit and by mechanical constraints of the ALICE experiment. The radiation field at that location is not negligible in its effects on electronics: the high-energy hadron flux can reach 10^3 cm^-2 s^-1. Static RAM (SRAM)-based Field Programmable Gate Arrays (FPGAs) are favoured over Application Specific Integrated Circuits (ASICs) or Radiation Hard by Design (RHBD) commercial devices because of their cost effectiveness. Moreover, SRAM-based FPGAs are re-configurable and provide the data throughput required by the ITS. The main issue with SRAM-based FPGAs, for the intended application, is the susceptibility of their Configuration RAM (CRAM) to Single-Event Upsets: the number of CRAM bits is indeed much higher than the number of bits of the logic they configure. The Total Ionizing Dose (TID) at the designed readout position is still acceptable for Commercial Off-The-Shelf (COTS) components, provided that proper verification is carried out.
This dissertation focuses on two parts of the design of the readout system: the Data Transmission Unit of the ALPIDE chip and the design of fundamental modules for the SRAM-based FPGA of the readout electronics. In the first part, a module of the Data Transmission Unit is designed, optimising the trade-off between power consumption, radiation tolerance, and jitter performance. The design was tested and thoroughly characterised, including tests under irradiation with 30 MeV protons. Furthermore, the Data Transmission Unit's performance was validated after its integration into the first prototypes of ITS modules. In the second part, the problem of developing a radiation-tolerant SRAM-based FPGA design is investigated and a solution is provided. First, a general methodology for designing radiation-tolerant Finite State Machines in SRAM-based FPGAs is analysed, implemented, and verified. Then, the radiation-tolerant FPGA design for the ITS readout is described, together with the radiation-effect mitigation techniques that were selectively applied to the different modules. The design was verified in multiple irradiation tests, and the results are reported and discussed.
This work describes the development of a comprehensive methodology for analyzing vibro-acoustic and wear mechanisms in transmission systems. The thesis addresses gaps in the fields of structural dynamics and abrasion mechanisms and opens new areas for further research.
The thesis attempts to understand new and relatively unexplored challenges such as the influence of wear on the dynamics of the drive train. It also focuses on developing new techniques for analyzing the vibration behavior of the drive-unit structures and the acoustic behavior of the surrounding fluids.
The developed methodology meets the requirements of both complete-system and component-level modeling by using a specifically identified combination of different simulation techniques. Based on the created template model, a three-stage spur-plus-helical gearbox is constructed and simulated as an application example. In addition to the internal mechanical excitation mechanisms, the transmission model also includes the rotational and translational dynamics of the gears, shafts and bearings. This is followed by an illustration of wear among the rotating components.
Different kinds of static and dynamic analyses are performed and coupled at various levels depending on the mechanical complexities involved. Furthermore, the structural vibration of the housing and the associated sound radiation are mapped into the surrounding fluid. Additionally, the approach for selecting potential optimization parameters is depicted. The final part focuses on the measurements of different system states used for validation of the model. In the end, the results obtained from both simulations and experiments are analyzed and assessed for their respective performance.
Autonomous steering of an electric bicycle based on sensor fusion using model predictive control
(2019)
In this thesis, a control and steering module for an autonomous bicycle was developed. Based on sensor fusion and model predictive control, the module is able to follow routes autonomously.
The system is developed to run on a Raspberry Pi. An ultrasonic sensor and a 2D Lidar sensor are used for distance measurements. The vehicle's position is determined using GPS signals. Additionally, a camera captures images for roadside detection. In order to recognize the road and the position of the vehicle on it, computer vision techniques are used: the captured images are denoised, Canny edge detection is performed, and a perspective transformation is applied. Thereafter, a sliding-window algorithm selects the edges belonging to the roadside, and a second-order polynomial is fitted to the selected data. Based on this, the road curvature and the lateral position of the vehicle on the road are calculated. The implemented software is thus able to detect straight and curved roads as well as the vehicle's lateral offset.
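The following Python/OpenCV sketch illustrates the described processing chain (denoising, Canny edge detection, perspective transformation, sliding-window selection, second-order polynomial fit). All thresholds, window counts and calibration points are assumptions for illustration, not the parameters used in the thesis.

```python
# Minimal sketch of a roadside-detection pipeline; not the thesis code.
import cv2
import numpy as np

def detect_roadside(img_bgr, src_pts, dst_pts, n_windows=9, margin=50):
    # 1) Denoise and detect edges.
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)

    # 2) Warp to a bird's-eye view (src_pts/dst_pts are assumed calibration points).
    h, w = edges.shape
    M = cv2.getPerspectiveTransform(src_pts, dst_pts)
    warped = cv2.warpPerspective(edges, M, (w, h))

    # 3) Sliding-window search: start at the column with the most edge pixels
    #    in the lower image half and follow it upwards window by window.
    histogram = np.sum(warped[h // 2:, :], axis=0)
    x_current = int(np.argmax(histogram))
    win_h = h // n_windows
    ys, xs = warped.nonzero()
    selected = []
    for i in range(n_windows):
        y_low, y_high = h - (i + 1) * win_h, h - i * win_h
        in_win = ((ys >= y_low) & (ys < y_high) &
                  (xs >= x_current - margin) & (xs < x_current + margin))
        idx = np.flatnonzero(in_win)
        selected.append(idx)
        if idx.size > 50:                      # re-center window on found pixels
            x_current = int(xs[idx].mean())
    selected = np.concatenate(selected)

    # 4) Fit x = a*y^2 + b*y + c to the selected roadside pixels.
    coeffs = np.polyfit(ys[selected], xs[selected], 2)

    # 5) Curvature radius at the image bottom and lateral offset from the image
    #    center (in pixels; a metres-per-pixel scale is needed on the vehicle).
    a, b, _ = coeffs
    y_eval = h - 1
    curvature = (1 + (2 * a * y_eval + b) ** 2) ** 1.5 / abs(2 * a)
    offset = np.polyval(coeffs, y_eval) - w / 2
    return coeffs, curvature, offset
```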
A route planning module was implemented to navigate the vehicle from the start to the destination coordinates. This is done by creating an abstract graph of the roads and using Dijkstra’s algorithm to determine the shortest path.
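A minimal sketch of this route-planning step is shown below; the road graph, node names and edge lengths are hypothetical, but the shortest-path search follows Dijkstra's algorithm as described.

```python
# Dijkstra's algorithm on an abstract road graph (edge weights in metres).
import heapq

def dijkstra(graph, start, goal):
    """graph: dict mapping node -> list of (neighbour, edge_length)."""
    dist = {start: 0.0}
    prev = {}
    queue = [(0.0, start)]
    visited = set()
    while queue:
        d, node = heapq.heappop(queue)
        if node in visited:
            continue
        visited.add(node)
        if node == goal:
            break
        for neighbour, length in graph.get(node, []):
            nd = d + length
            if nd < dist.get(neighbour, float("inf")):
                dist[neighbour] = nd
                prev[neighbour] = node
                heapq.heappush(queue, (nd, neighbour))
    # Reconstruct the shortest path from start to goal.
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return list(reversed(path)), dist[goal]

# Hypothetical intersection graph: A -> B -> D is shorter than A -> C -> D.
roads = {"A": [("B", 120.0), ("C", 90.0)],
         "B": [("D", 60.0)],
         "C": [("D", 150.0)]}
print(dijkstra(roads, "A", "D"))   # (['A', 'B', 'D'], 180.0)
```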
Four MPC controllers were implemented to control the movements of the vehicle. They are based on state-space equations derived from the linear single-track vehicle model. This relatively straightforward model makes it possible to predict the vehicle behavior and is efficient to compute. Each controller was built with different parameters for different vehicle speeds to account for the non-linearity of the system. The controllers simulate the future states of the system at each time step and select appropriate control signals for steering, throttle and brakes.
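The sketch below illustrates one receding-horizon step of such a controller on a discretized linear state-space model. The two-state model, weights and steering limit are placeholder assumptions (and cvxpy is used only for readability); they are not the thesis' linearized single-track matrices or its Raspberry Pi implementation.

```python
# One receding-horizon MPC step for x_{k+1} = A x_k + B u_k.
import numpy as np
import cvxpy as cp

def mpc_step(A, B, x0, x_ref, N=10, u_max=0.5):
    nx, nu = B.shape
    x = cp.Variable((nx, N + 1))
    u = cp.Variable((nu, N))
    Q = np.eye(nx)            # state tracking weight
    R = 0.1 * np.eye(nu)      # steering effort weight
    cost = 0
    constraints = [x[:, 0] == x0]
    for k in range(N):
        cost += cp.quad_form(x[:, k] - x_ref, Q) + cp.quad_form(u[:, k], R)
        constraints += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                        cp.abs(u[:, k]) <= u_max]   # steering angle limit
    cp.Problem(cp.Minimize(cost), constraints).solve()
    return u.value[:, 0]      # apply only the first control move

# Illustrative 2-state model (lateral offset, heading error) at a fixed speed.
dt, v = 0.1, 3.0
A = np.array([[1.0, v * dt],
              [0.0, 1.0]])
B = np.array([[0.0],
              [v * dt / 1.0]])   # wheelbase of 1.0 m assumed
u0 = mpc_step(A, B, x0=np.array([0.5, 0.0]), x_ref=np.zeros(2))
print(u0)   # first steering command of the horizon
```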
In this thesis, all the components of the steering and control module were individually validated. It was established that each individual component works as expected, and specific constraints and accuracy limits were identified. Finally, the closed-loop capabilities of the system were assessed using a test vehicle. Despite some limitations imposed by this setup, it was shown that the control module is indeed capable of autonomously navigating a vehicle and avoiding collisions.
This dissertation deals with the development of a traffic simulation system that can generate simulation graphs fully automatically from maps. The focus is on urban simulation studies in arbitrary municipalities and cities. The second fundamental pillar of this work is therefore the construction of traffic models that represent the most important types of road users in urban areas. Models for cars, bicycles and pedestrians were developed.
A review of the state of research in this area showed that no existing system combines automatic graph generation with models that capture the interactions between the different types of road users. There are essentially two groups of traffic simulation systems. On the one hand, there are systems that achieve highly accurate simulation results but require exact (partly manual) modeling of the conditions in the area to be simulated. These systems usually simulate traffic models that reproduce the behavior of road users very well and therefore require a high computational effort. On the other hand, there are simulation systems that can generate road graphs automatically but simulate only very simplified traffic models on them, usually covering car movements only. The benefit of this approach is the ability to simulate very large scenarios.
Within this work, a system with properties of both fundamental approaches is developed in order to simulate multimodal inner-city traffic on the basis of automatically generated road graphs. The development of a new traffic simulation system appeared necessary because, at the time of the literature review, no other existing system was suitable for achieving the stated goal. The system developed in this work is called MAINSIM (MultimodAle INnerstädtische VerkehrsSIMulation, i.e. multimodal inner-city traffic simulation).
The simulation graphs are extracted from OpenStreetMap map data. The map data is first separated into different logical layers and then used to derive a graph of the road network. A set of analysis steps corrects inaccuracies in the map data and adds information needed during the simulation (e.g. the connection direction between two roads). The system uses geographic information system components to process the geodata, which has the advantage of being easily extensible to further data sources.
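As an illustration of this extraction step, the following sketch builds a simple street graph from OpenStreetMap XML data by connecting the consecutive node references of ways tagged as highways. It is a minimal assumption-based example and omits the layer separation and correction steps described above.

```python
# Minimal sketch: OpenStreetMap XML -> street graph (nodes and directed edges).
import xml.etree.ElementTree as ET
from collections import defaultdict

def build_street_graph(osm_file):
    tree = ET.parse(osm_file)
    root = tree.getroot()

    # Collect node coordinates.
    coords = {n.get("id"): (float(n.get("lat")), float(n.get("lon")))
              for n in root.iter("node")}

    # Connect consecutive node references of every way tagged as a highway.
    graph = defaultdict(set)
    for way in root.iter("way"):
        tags = {t.get("k"): t.get("v") for t in way.iter("tag")}
        if "highway" not in tags:
            continue
        refs = [nd.get("ref") for nd in way.iter("nd")]
        oneway = tags.get("oneway") == "yes"
        for a, b in zip(refs, refs[1:]):
            graph[a].add(b)
            if not oneway:
                graph[b].add(a)
    return coords, graph

# coords, graph = build_street_graph("frankfurt.osm")  # file name hypothetical
```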
The traffic simulation uses microscopic behavior models, i.e. every single road user is simulated. The car model is based on the Nagel-Schreckenberg model, which is widely used in traffic research, but includes numerous modifications and extensions so that it can also be used away from motorways and can model further behaviors. The bicycle model is derived from the car model through a suitable parameterization. To develop the pedestrian model, literature on pedestrian behavior was discussed in order to derive suitable properties (e.g. speeds and road-crossing behavior patterns). MAINSIM thus also makes it possible to examine traffic from the perspective of pedestrians or cyclists and to determine their effects on the road traffic of an entire city.
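For reference, the classic Nagel-Schreckenberg update rules that the car model builds on can be sketched as follows; the cell length, maximum speed and dawdling probability are the textbook values, not MAINSIM's extended urban model.

```python
# Classic Nagel-Schreckenberg cellular automaton on a circular road.
# One cell corresponds to roughly 7.5 m; velocities are in cells per time step.
import random

def nasch_step(positions, velocities, road_length, v_max=5, p_dawdle=0.3):
    order = sorted(range(len(positions)), key=lambda i: positions[i])
    new_v = list(velocities)
    for idx, i in enumerate(order):
        ahead = order[(idx + 1) % len(order)]
        gap = (positions[ahead] - positions[i] - 1) % road_length
        v = min(velocities[i] + 1, v_max)      # 1) accelerate
        v = min(v, gap)                        # 2) brake to avoid collision
        if v > 0 and random.random() < p_dawdle:
            v -= 1                             # 3) random dawdling
        new_v[i] = v
    new_pos = [(positions[i] + new_v[i]) % road_length
               for i in range(len(positions))]
    return new_pos, new_v

# Ten cars on a circular road of 100 cells, all starting at speed 0.
pos = list(range(0, 100, 10))
vel = [0] * 10
for _ in range(50):
    pos, vel = nasch_step(pos, vel, road_length=100)
```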
The car model was evaluated on motorway scenarios and inner-city road graphs. It was able to reproduce the well-understood relationships between traffic density, flow and speed. To the best of the author's knowledge, no studies exist for the evaluation of bicycle models; therefore, the influence of cyclists on road traffic and the speeds driven by bicycles were examined instead. The pedestrian model was able to reproduce the behavior patterns identified in the literature review.
After the most important components of MAINSIM had been examined, case studies covering various areas were carried out. The most important results from this part of the work are:
- It is possible to predict traffic jams within Frankfurt with the help of machine learning methods.
- Non-compliance with traffic rules can, depending on the behavior, affect the traffic flow considerably, but can also remain without effect.
- Communication techniques could improve the route planning of cars in the future. A method based on pheromone trails was examined in this work.
- MAINSIM is suitable for simulating large scenarios. In the last case study of this work, the car traffic of a simulation area around Frankfurt am Main was simulated with approximately 1.6 million trips per day. Since MAINSIM includes a fuel consumption and CO2 emission model, the CO2 emissions within Frankfurt could be determined. A coupled weather simulation using an atmospheric model showed how the gases disperse within Frankfurt.
For professional use in traffic research, the developed simulation system must be extended by a method for calibration against sensor data in the simulation area. The existing traffic-light schedules do not reproduce real traffic lights; extending the system to automatically integrate machine-readable switching plans of traffic lights within the simulation area would further increase the quality of the results.
MAINSIM has several fields of application. Simulation areas can be modeled very quickly, which makes it well suited for preliminary studies. MAINSIM can also be used when large scenarios have to be simulated, e.g. to determine the distribution of CO2 emissions within a city. This work showed that bicycles and pedestrians can have an effect on the fuel consumption of cars; for such scenarios, a simulation system should therefore be used that can represent the relevant types of road users. MAINSIM can be extended arbitrarily to investigate further scientific questions.
Cyber-Physical Systems (CPS) are growing more and more complex due to the availability of cheap hardware, sensors, actuators and communication links. A network of cooperating CPSs (CPN) increases the complexity further. This poses challenges as well as opportunities: the increasing complexity makes it harder to design, operate, optimize and maintain such CPNs, but an appropriate use of the growing resources in computational nodes, sensors and actuators can significantly improve system performance, reliability and flexibility. Therefore, self-X features like self-organization, self-adaptation and self-healing are key principles for such systems.
Additionally, CPNs are often deployed in dynamic, unpredictable environments and safety-critical domains such as transportation, energy, and healthcare. In such domains, applications of different criticality levels usually coexist. In an automotive environment, for example, the brake has a higher safety criticality level than the infotainment system. As a result of this mixed criticality, applications requiring hard real-time guarantees compete with those requiring soft real-time guarantees and with best-effort applications for the resources available in the overall system. This leads to the need to accommodate multiple levels of criticality while ensuring safety and reliability, which increases the already high complexity even more.
This thesis deals with the question of how to conveniently, effectively and efficiently handle the management and complexity of mixed-criticality CPNs (MC-CPNs). Since this can no longer be done by the system developer without the assistance of the system itself, it is essential to develop new approaches and techniques to ensure that such systems can operate under a range of conditions while meeting stringent requirements.
Based on five research hypotheses, this thesis introduces Chameleon, a comprehensive adaptive middleware for Cyber-Physical Networks that supports mixed criticality and efficiently and autonomously takes care of the management and complexity of CPNs with regard to the mixed-criticality aspect.
Chameleon contributes to the state of the art by introducing and combining the following concepts:
- A comprehensive self-adaptation mechanism on all levels of the system model is provided.
- This mechanism allows a flexible combination of parametric and structural adaptation actions (relocation, scheduling, tuning, ...) to modify the behavior of the system.
- Real-time constraints of mixed-critical applications (hard real-time, soft real-time, best-effort) are considered in all possible adaptation conditions and actions by the use of the importance parameter.
- CPNs are supported by the introduction of different scopes (local, system, global) for the adaptation conditions and actions. This also enables the combination of different scopes for conditions and actions.
- The realization of the adaptation with a MAPE-K loop, instantiated by a distributed LCS, allows for real-time-capable reasoning about adaptation actions and also works on resource-constrained systems (a generic MAPE-K loop is sketched after this list).
- The developed rule language Rango offers an intuitive way to specify an initial rule set for LCS in the context of CPS/CPNs and supports the system administrators in the process of rule set generation.
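For orientation, the following generic Python sketch shows the structure of a MAPE-K loop as referenced in the list above. The rule format, metrics and actions are hypothetical illustrations and do not represent Chameleon's distributed LCS reasoning or its Rango rules.

```python
# Generic MAPE-K skeleton: Monitor, Analyze, Plan, Execute over shared Knowledge.
from dataclasses import dataclass, field

@dataclass
class Knowledge:
    metrics: dict = field(default_factory=dict)     # monitored node state
    rules: list = field(default_factory=list)       # (condition, action) pairs
    planned: list = field(default_factory=list)     # adaptation actions to run

def monitor(knowledge, sensors):
    knowledge.metrics.update(sensors())             # e.g. {"cpu_load": 0.93}

def analyze(knowledge):
    return [action for condition, action in knowledge.rules
            if condition(knowledge.metrics)]

def plan(knowledge, triggered):
    # A real system would rank actions by criticality/importance;
    # here we simply keep them in rule order.
    knowledge.planned = triggered

def execute(knowledge, actuators):
    for action in knowledge.planned:
        actuators(action)
    knowledge.planned = []

def mape_k_step(knowledge, sensors, actuators):
    monitor(knowledge, sensors)
    plan(knowledge, analyze(knowledge))
    execute(knowledge, actuators)

# Hypothetical rule: relocate a best-effort task when the local CPU is overloaded.
k = Knowledge(rules=[(lambda m: m.get("cpu_load", 0.0) > 0.9,
                      "relocate_best_effort_task")])
mape_k_step(k, sensors=lambda: {"cpu_load": 0.93}, actuators=print)
```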