Autonomous steering of an electric bicycle based on sensor fusion using model predictive control
(2019)
In this thesis, a control and steering module for an autonomous bicycle was developed. Based on sensor fusion and model predictive control (MPC), the module is able to follow routes autonomously.
The system is developed to run on a Raspberry Pi. An ultrasonic sensor and a 2D Lidar sensor are used for distance measurements. The vehicle’s position is determined using GPS signals. Additionally, a camera captures images for roadside detection. In order to recognize the road and the position of the vehicle on it, computer vision techniques are used: the captured images are denoised, Canny edge detection is performed, and a perspective transformation is applied. Thereafter, a sliding-window algorithm selects the edges belonging to the roadside and a second-order polynomial is fitted to the selected data. Based on this, the road curvature and the lateral position of the vehicle on the road are calculated. The implemented software is thus able to detect straight and curved roads as well as the vehicle’s lateral offset.
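As a rough illustration of the image-processing steps described above, the following Python sketch chains denoising, Canny edge detection, a perspective transform, a sliding-window search, and a second-order polynomial fit using OpenCV and NumPy. All thresholds, window counts, and transform points are illustrative assumptions, not the settings used in the thesis.

```python
# Simplified sketch of the roadside-detection pipeline: denoise, detect edges,
# warp to a bird's-eye view, follow the roadside with sliding windows, and fit
# a second-order polynomial. Parameter values are illustrative assumptions.
import cv2
import numpy as np

def detect_roadside(image_bgr, src_pts, dst_pts):
    # Denoise and extract edges.
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)

    # Perspective transform to a bird's-eye view (src_pts/dst_pts: 4 points each).
    h, w = edges.shape
    M = cv2.getPerspectiveTransform(src_pts, dst_pts)
    warped = cv2.warpPerspective(edges, M, (w, h))

    # Sliding-window search: start at the column with the most edge pixels in
    # the lower image half and follow the roadside upwards.
    histogram = warped[h // 2:, :].sum(axis=0)
    x_current = int(np.argmax(histogram))
    n_windows, margin = 9, 60
    window_height = h // n_windows
    ys, xs = warped.nonzero()
    selected = []
    for i in range(n_windows):
        y_low, y_high = h - (i + 1) * window_height, h - i * window_height
        x_low, x_high = x_current - margin, x_current + margin
        idx = np.flatnonzero((ys >= y_low) & (ys < y_high) &
                             (xs >= x_low) & (xs < x_high))
        selected.append(idx)
        if idx.size > 50:                       # re-centre on the detected edge
            x_current = int(xs[idx].mean())
    idx = np.concatenate(selected)

    # Second-order polynomial x = a*y^2 + b*y + c through the roadside pixels,
    # and the resulting curve radius at the bottom of the image (pixel units).
    a, b, c = np.polyfit(ys[idx], xs[idx], 2)
    curvature_radius = (1 + (2 * a * h + b) ** 2) ** 1.5 / abs(2 * a)
    return (a, b, c), curvature_radius
```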
A route planning module was implemented to navigate the vehicle from the start to the destination coordinates. This is done by creating an abstract graph of the roads and using Dijkstra’s algorithm to determine the shortest path.
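The route planning can be illustrated with a minimal sketch: the road network as an adjacency list and Dijkstra's algorithm returning the shortest path between two intersections. The node names and edge lengths below are invented for illustration and not taken from the thesis.

```python
# Dijkstra's algorithm on an abstract road graph (node -> list of
# (neighbour, length) pairs). Nodes and lengths below are illustrative only.
import heapq

def dijkstra(graph, start, goal):
    dist = {start: 0.0}
    prev = {}
    queue = [(0.0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue                            # stale queue entry
        for neighbour, length in graph.get(node, []):
            nd = d + length
            if nd < dist.get(neighbour, float("inf")):
                dist[neighbour] = nd
                prev[neighbour] = node
                heapq.heappush(queue, (nd, neighbour))
    path, node = [goal], goal                   # walk back from goal to start
    while node != start:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[goal]

# Example: four intersections connected by road segments (lengths in metres).
roads = {
    "A": [("B", 120.0), ("C", 300.0)],
    "B": [("C", 90.0), ("D", 400.0)],
    "C": [("D", 150.0)],
    "D": [],
}
print(dijkstra(roads, "A", "D"))                # (['A', 'B', 'C', 'D'], 360.0)
```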
Four MPC controllers were implemented to control the movements of the vehicle. They are based on state-space equations derived from the linear single-track vehicle model. This relatively straightforward model makes it possible to predict the vehicle behavior and is efficient to compute. Each controller was built with different parameters for different vehicle speeds to account for the non-linearity of the system. The controllers simulate the future states of the system at each time step and select appropriate control signals for steering, throttle, and brakes.
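The predict-and-select idea behind the controllers can be sketched with a generic discrete linear state-space model x[k+1] = A·x[k] + B·u[k] and a quadratic tracking cost. The matrices, weights, and the brute-force search over a few candidate inputs are simplifications for illustration; they do not reproduce the single-track model or the optimization used in the thesis.

```python
# One simplified MPC step: simulate the predicted states for a set of candidate
# (constant) inputs over the horizon and pick the input with the lowest cost.
# The model matrices and weights below are placeholders, not the thesis's model.
import numpy as np

def mpc_step(A, B, x0, x_ref, candidates, horizon, Q, R):
    best_u, best_cost = None, float("inf")
    for u in candidates:
        x, cost = x0.copy(), 0.0
        for _ in range(horizon):
            x = A @ x + B @ np.atleast_1d(u)     # predict the next state
            err = x - x_ref
            cost += err @ Q @ err + R * u * u    # tracking error + input effort
        if cost < best_cost:
            best_u, best_cost = u, cost
    return best_u

# Toy 2-state model (e.g. lateral offset and heading error) with one input.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.05]])
x0 = np.array([0.5, 0.0])                        # 0.5 m lateral offset
steering = mpc_step(A, B, x0, x_ref=np.zeros(2),
                    candidates=np.linspace(-0.3, 0.3, 13),
                    horizon=20, Q=np.diag([1.0, 0.1]), R=0.01)
print(f"chosen steering input: {steering:.3f} rad")
```

In the thesis, four such controllers with speed-dependent parameters cover the operating range; the sketch above shows only a single, simplified step for one operating point.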
In this thesis, all the components of the steering and control module were individually validated. It was established that each individual component works as expected, and certain constraints and accuracy limits were identified. Finally, the closed-loop capabilities of the system were assessed using a test vehicle. Despite some limitations imposed by this setup, it was shown that the control module is indeed capable of autonomously navigating a vehicle and avoiding collisions.
Programmable hardware in the form of FPGAs has found its place in various high energy physics experiments over the past few decades. These devices provide highly parallel and fully configurable data transport, data formatting, and data processing capabilities with custom interfaces, even in rigid or constrained environments. Additionally, FPGA functionalities and the number of their logic resources have grown exponentially in the last few years, making FPGAs more and more suitable for complex data processing tasks. ALICE is one of the four main experiments at the LHC and is specialized in the study of heavy-ion collisions. The readout chain of the ALICE detectors makes use of FPGAs at various places. The Read-Out Receiver Cards (RORCs) are one example of FPGA-based readout hardware, building the interface between the custom detector electronics and the commercial server nodes in the data processing clusters of the Data Acquisition (DAQ) system as well as the High Level Trigger (HLT). These boards are implemented as server plug-in cards with serial optical links towards the detectors. Experimental data is received via more than 500 optical links, already partly pre-processed in the FPGAs, and pushed towards the host machines. Computer clusters consisting of a few hundred nodes collect, aggregate, compress, reconstruct, and prepare the experimental data for permanent storage and later analysis. With the end of the first LHC run period in 2012 and the start of Run 2 in 2015, the DAQ and HLT systems were renewed and several detector components were upgraded for higher data rates and event rates. Increased detector link rates and obsolete host interfaces rendered it impossible to reuse the previous RORCs in Run 2.
This thesis describes the development, integration, and maintenance of the next generation of RORCs for ALICE in Run 2. A custom hardware platform, initially developed as a joint effort between the ALICE DAQ and HLT groups in the course of this work, found its place in the Run 2 readout systems of the ALICE and ATLAS experiments. The hardware fulfills all experiment requirements, matches its target performance, and has been running stably in the production systems since the start of Run 2. Firmware and software developments for the hardware evaluation, the design of the board, the mass production hardware tests, as well as the operation of the final board in the HLT were carried out as part of this work. 74 boards were integrated into the HLT hardware and software infrastructure, with various firmware and software developments, to provide the main experimental data input and output interface of the HLT for Run 2. The hardware cluster finder, an FPGA-based data pre-processing core from the previous generation of RORCs, was ported to the new hardware. It has been improved and extended to meet the experimental requirements throughout Run 2. The throughput of this firmware component was doubled and the algorithm was extended, providing improved noise rejection and an increased overall mean data compression ratio compared to its previous implementation. The hardware cluster finder forms a crucial component in the HLT data reconstruction and compression scheme, with the processing performance of one board equivalent to around ten server nodes for comparable processing steps in software.
The work on the firmware development, especially on the hardware cluster finder, once more demonstrated that developing and maintaining data processing algorithms with the common low-level hardware description methods is tedious and time-consuming. Therefore, a high-level synthesis (HLS) hardware description method applying dataflow computing at an algorithmic level to FPGAs was evaluated in this context. The hardware cluster finder served as an example of a typical data processing algorithm in a high energy physics readout application. The existing and highly optimized low-level implementation provided a reference for comparisons in terms of throughput and resource usage. The cluster finder algorithm could be implemented in the dataflow description with comparatively little effort, providing fast development cycles, compact code, and, at the same time, simplified extension and maintenance options. The performance results in terms of throughput and resource usage are comparable to the manual implementation. The dataflow environment proved to be highly valuable for design space explorations. An integration of the dataflow description into the HLT firmware and software infrastructure was demonstrated as a proof of concept. A high-level hardware description could thus ease the design space exploration, the initial development, the maintenance, and the extension of hardware algorithms for high energy physics readout applications.
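Since the hardware cluster finder is only described at a high level here, the following software-only sketch merely illustrates the kind of streaming algorithm involved: scanning a sequence of ADC samples for local maxima above a threshold and computing a total charge and centre-of-gravity position per cluster. The actual FPGA and dataflow implementations operate on two-dimensional detector data with far more elaborate merging and noise handling; the threshold, window size, and sample values below are assumptions.

```python
# Software-only toy cluster finder: find local maxima above a threshold in a
# stream of ADC samples and report charge and centre of gravity per cluster.
# Threshold, window size, and input data are illustrative assumptions.
def find_clusters(samples, threshold=5, half_window=2):
    clusters = []
    for i in range(half_window, len(samples) - half_window):
        window = samples[i - half_window:i + half_window + 1]
        peak = samples[i]
        if peak >= threshold and peak == max(window):   # cluster seed
            charge = sum(window)
            cog = sum((i - half_window + k) * q
                      for k, q in enumerate(window)) / charge
            clusters.append({"position": cog, "charge": charge})
    return clusters

print(find_clusters([0, 1, 6, 9, 7, 2, 0, 0, 4, 8, 5, 1]))
# -> two clusters, centred near sample indices 3.1 and 9.2
```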
Hierarchical self-organizing systems for task-allocation in large scaled distributed architectures
(2019)
This thesis deals with autonomous, decentralized task allocation in a large-scale multi-core network. The self-organization of such interconnected systems is becoming more and more important for upcoming developments, since the complexity of those systems is expected to become hardly manageable for human users. Self-organization is a research topic of the Organic Computing initiative, which aims to find solutions for technical systems by imitating natural systems and their processes. Within this initiative, a system for task allocation in a small-scale multi-core network was already developed, researched, and published. The system is called the Artificial Hormone System (AHS), since it is inspired by the endocrine system of mammals. The AHS produces a high amount of communication load when the multi-core network grows larger.
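For readers unfamiliar with the AHS, the following minimal sketch conveys the flavour of hormone-based allocation as described in the published AHS concept: every processing element (PE) derives a modified eager value for a task from its local eagerness plus received accelerator hormones minus received suppressor hormones, and the PE with the highest value takes the task. The hormone values and the single allocation round below are illustrative assumptions, not material from this thesis.

```python
# Minimal, single-round sketch of hormone-based task allocation in the spirit
# of the AHS: the PE with the highest modified eager value wins the task.
# All hormone values below are invented for illustration.
def modified_eager_value(local_eager, accelerators, suppressors):
    return local_eager + sum(accelerators) - sum(suppressors)

def allocate_task(pes):
    """pes: dict PE-id -> (local eager value, accelerators, suppressors)."""
    scores = {pe: modified_eager_value(*hormones) for pe, hormones in pes.items()}
    winner = max(scores, key=scores.get)
    return winner, scores

pes = {
    "PE0": (5.0, [1.0], [0.5]),   # well suited for the task, lightly loaded
    "PE1": (4.0, [], [2.0]),      # strongly suppressed, e.g. already busy
    "PE2": (3.0, [0.5], []),
}
print(allocate_task(pes))         # PE0 wins with a modified eager value of 5.5
```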
The contribution of this thesis consists of two new approaches, both based on the AHS, to cope with large-scale architectures. The main idea of both approaches is to introduce a hierarchy into the AHS in order to reduce the communication load. The first and more thoroughly researched approach is called the Hierarchical Artificial Hormone System (HAHS), which organizes the processing elements in clusters and adds an additional communication layer between them. The second approach is the Recursive Artificial Hormone System (RAHS), which also clusters the system’s processing elements and arranges the clusters in a topological tree structure for communication.
Both approaches are explained in this thesis in terms of their principal structure as well as some optional methods. Furthermore, this thesis presents estimations of the worst-case timing behavior and the worst-case communication load of the HAHS and RAHS. Finally, the evaluation results of both approaches, especially in comparison to the AHS, are shown and discussed.
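A back-of-the-envelope sketch of why the hierarchy helps: in a flat broadcast scheme every PE informs every other PE in each hormone cycle, whereas in a clustered scheme messages stay inside the clusters and only the cluster heads exchange information across clusters. The message-counting model below is a simplifying assumption of this illustration, not the worst-case analysis derived in the thesis.

```python
# Rough per-cycle message counts: flat broadcast (AHS-like) versus a clustered
# scheme with an extra layer between cluster heads (HAHS-like). The counting
# model is a simplification assumed for this illustration.
def flat_messages(n):
    return n * (n - 1)                       # every PE informs every other PE

def clustered_messages(n, clusters):
    size = n // clusters                     # assume equally sized clusters
    intra = clusters * size * (size - 1)     # broadcasts inside each cluster
    inter = clusters * (clusters - 1)        # exchange between cluster heads
    return intra + inter

for n in (16, 64, 256):
    print(n, flat_messages(n), clustered_messages(n, clusters=int(n ** 0.5)))
# 256 PEs: 65280 messages flat vs. 4080 with 16 clusters of 16 PEs each
```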