Acceleration of Biomedical Image Processing and Reconstruction with FPGAs
Increasing chip sizes and better programming tools have made it possible to push the boundaries of application acceleration with reconfigurable computer chips. In this thesis, the potential of acceleration with Field Programmable Gate Arrays (FPGAs) is examined for applications that perform biomedical image processing and reconstruction. The dataflow paradigm was used to port the analysis of image data for localization microscopy and for 3D electron tomography from an imperative description to the FPGA for the first time.
After the primitives of image processing on FPGAs are presented, a general workflow is given for analyzing imperative source code and converting it to a hardware pipeline in which every node processes image data in parallel. This theoretical foundation is then used to accelerate both example applications. For localization microscopy, a speedup factor of 185 over an Intel i5 450 CPU was achieved, and electron tomography was sped up by a factor of 5 over an Nvidia Tesla C1060 graphics card, while maintaining full accuracy in both cases.
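The conversion from imperative code to a streaming pipeline can be illustrated with a small software analogue. This is a hedged sketch, not the thesis's actual toolchain: an imperative 3-tap mean filter re-expressed as a chain of generator "nodes", each passing pixels onward the way an FPGA pipeline stage would.

```python
# Dataflow sketch (illustrative only): each function is one pipeline node.

def source(pixels):
    # Entry node: streams raw pixel values downstream.
    for p in pixels:
        yield p

def window3(stream):
    # Line-buffer analogue: emits sliding windows of 3 consecutive pixels.
    buf = []
    for p in stream:
        buf.append(p)
        if len(buf) == 3:
            yield tuple(buf)
            buf.pop(0)

def mean_node(windows):
    # Arithmetic node: integer mean of each 3-pixel window.
    for a, b, c in windows:
        yield (a + b + c) // 3

pixels = [10, 20, 30, 40, 50]
result = list(mean_node(window3(source(pixels))))
print(result)  # [20, 30, 40]
```

On an FPGA, each node would become a hardware stage and all stages would process different pixels simultaneously; the generators here only model the data movement, not the parallelism.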
High-energy physics experiments aim to deepen our understanding of the fundamental structure of matter and the governing forces. One of the most challenging aspects of the design of new experiments is data management and event selection. The search for increasingly rare and intricate physics events calls for high-statistics measurements and sophisticated event analysis. With progressively complex event signatures, traditional hardware-based trigger systems reach the limits of realizable latency and complexity. The Compressed Baryonic Matter experiment (CBM) employs a novel approach for data readout and event selection to address these challenges. Self-triggered, free-streaming detectors push all data to a central compute cluster, called First-level Event Selector (FLES), for software-based event analysis and selection. While this concept solves many issues present in classical architectures, it also sets new challenges for the design of the detector readout systems and online event selection.
This thesis presents an efficient solution to the data management challenges presented by self-triggered, free-streaming particle detectors. The FLES must receive asynchronously streamed data from a heterogeneous detector setup at rates of up to 1 TB/s. The real-time processing environment implies that all components have to deliver high performance and reliability to record as much valuable data as possible. The thesis introduces a time-based data model to partition the input streams into containers of fixed length in experiment time for efficient data management. These containers provide all necessary metadata to enable generic, detector-subsystem-agnostic data distribution across the entire cluster. An analysis shows that the introduced data overhead is well below 1 % for a wide range of system parameters.
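The partitioning into fixed-length containers in experiment time can be sketched in a few lines. The names and the container length here are illustrative, not the actual FLES data structures.

```python
# Hedged sketch: partition an asynchronous stream of time-stamped items
# into containers of fixed length in experiment time.

CONTAINER_LENGTH = 100  # container length in experiment-time units (illustrative)

def build_containers(items):
    """Group (timestamp, payload) items by container index = t // length."""
    containers = {}
    for t, payload in items:
        idx = t // CONTAINER_LENGTH
        containers.setdefault(idx, []).append((t, payload))
    return containers

# Items arrive asynchronously but carry timestamps:
stream = [(5, 'a'), (42, 'b'), (120, 'c'), (199, 'd'), (250, 'e')]
c = build_containers(stream)
print(sorted(c))  # [0, 1, 2]
print(c[1])       # [(120, 'c'), (199, 'd')]
```

Because the container index is a pure function of the timestamp, every input stream can be partitioned independently of the detector subsystem that produced it, which is what makes subsystem-agnostic distribution across the cluster possible.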
Furthermore, a concept and the implementation of a detector data input interface for the CBM FLES, optimized for resource-efficient data transport, are presented. The central element of the architecture is an FPGA-based PCIe extension card for the FLES entry nodes. The hardware designs developed in the thesis enable interfacing with a diverse set of detector systems. A custom, high-throughput DMA design structures data in a way that enables low-overhead access and efficient software processing. The ability to share the host DMA buffers with other devices, such as an InfiniBand HCA, allows for true zero-copy data distribution between the cluster nodes. The discussed FLES input interface is fully implemented and has already proven its reliability in production operation in various physics experiments.
The focus of interest in High Energy Physics experiments is shifting towards the search for and study of extremely rare particles and phenomena. The search for rare probes requires higher statistics, achieved by increasing the particle interaction rate. The structure of the events also becomes more complicated: the multiplicity of particles in each event increases, and event pileup appears. Due to technical limitations, such a data flow cannot be stored in full on available storage devices. The solution to this problem is correct triggering of events and real-time data processing.
In this work, accelerating and improving the Cellular Automaton based algorithms for reconstruction of charged-particle trajectories in the STAR experiment is considered, with the goal of deploying them for real-time track reconstruction within the High-Level Trigger. This is an important step in the preparation of the CBM experiment as part of the FAIR Phase-0 program. Studying online data processing methods under real conditions at similar interaction energies allows us to examine this process and identify possible weaknesses of the approach.
Two versions of the Cellular Automaton based track reconstruction are discussed, used depending on the features of the detecting systems. The HFT CA Track Finder, similar to the tracking algorithm of the CBM experiment, has been accelerated by several hundred times using both algorithm optimization and data-level parallelism. The TPC CA Track Finder has been upgraded to improve reconstruction quality while maintaining high calculation speed. The algorithm was tuned to work with the new iTPC geometry and provides an additional module for very-low-momentum track reconstruction.
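The Cellular Automaton idea behind both track finders can be illustrated with a toy example (this is not the STAR or CBM code, and the geometry is deliberately trivial): hits on consecutive detector layers are linked into segments, and each segment's CA state is the length of the longest chain of compatible inner segments ending at it.

```python
# Toy CA track finder: 1D hit positions per layer, straight-line compatibility.

def build_segments(layers, max_slope=1.0):
    """Link hits on adjacent layers into segments (layer, inner_y, outer_y)."""
    segs = []
    for i in range(len(layers) - 1):
        for y0 in layers[i]:
            for y1 in layers[i + 1]:
                if abs(y1 - y0) <= max_slope:
                    segs.append((i, y0, y1))
    return segs

def longest_chain(segs):
    """CA step: state = 1 + max state over compatible inner-layer segments."""
    state = {}
    for s in sorted(segs):  # sorted by layer index, inner to outer
        i, y0, y1 = s
        preds = [state[p] for p in segs if p[0] == i - 1 and p[2] == y0]
        state[s] = 1 + max(preds, default=0)
    return max(state.values(), default=0)

layers = [[0.0], [0.5], [1.0], [5.0]]  # y-positions of hits on 4 layers
segs = build_segments(layers)
print(longest_chain(segs))  # 2: the chain 0.0 -> 0.5 -> 1.0 (outlier 5.0 rejected)
```

The real algorithms work in 3D with a Kalman-filter fit for compatibility, but the combinatorial structure above is what vectorizes well: segment building and neighbour counting are independent per segment and map naturally onto SIMD lanes.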
The improved track reconstruction algorithm for the TPC detector in the STAR experiment was included in the HLT reconstruction chain and successfully tested in express production for online analysis of real data. This made it possible to obtain important physics results during the experiment runtime without full offline data processing. The tracker is also being prepared for integration into the standard offline data processing chain, after which it will become the baseline track-finding algorithm in the STAR experiment.
Modern experiments in heavy-ion collisions operate with huge data rates that cannot be fully stored on currently available storage devices. Therefore, the data flow must be reduced by selecting those collisions that potentially carry information of physics interest. The future CBM experiment will have no simple criteria for selecting such collisions and requires full online reconstruction of the collision topology, including reconstruction of short-lived particles.
In this work the KF Particle Finder package for online reconstruction and selection of short-lived particles is proposed and developed. It reconstructs more than 70 decays, covering signals from all the physics cases of the CBM experiment: strange particles, strange resonances, hypernuclei, low mass vector mesons, charmonium, and open-charm particles.
The package is based on the Kalman filter method, providing a full set of particle parameters together with their errors, including position, momentum, mass, energy, and lifetime. It shows high quality of the reconstructed particles, high efficiencies, and high signal-to-background ratios.
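The core operation behind such reconstruction, combining daughter four-momenta into a mother candidate and computing its invariant mass, can be sketched as follows. This is illustrative only, not the KF Particle Finder code, and the daughter kinematics are made-up numbers.

```python
# Invariant mass of a mother particle from its daughters' four-momenta.
import math

def invariant_mass(daughters):
    """daughters: list of (E, px, py, pz) four-momenta in GeV."""
    E  = sum(d[0] for d in daughters)
    px = sum(d[1] for d in daughters)
    py = sum(d[2] for d in daughters)
    pz = sum(d[3] for d in daughters)
    # m^2 = E^2 - |p|^2 for the summed four-momentum
    return math.sqrt(E * E - px * px - py * py - pz * pz)

# Two back-to-back daughters, each with E = 0.5 GeV, |p| = 0.3 GeV:
m = invariant_mass([(0.5, 0.3, 0.0, 0.0), (0.5, -0.3, 0.0, 0.0)])
print(round(m, 3))  # 1.0
```

The Kalman filter formulation goes further than this sketch: it propagates the daughters' covariance matrices, so the mother candidate carries errors on mass, lifetime, and decay vertex as well, which is what enables the quality cuts used in online selection.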
The KF Particle Finder is extremely fast, achieving a reconstruction speed of 1.5 ms per minimum-bias AuAu collision at 25 AGeV beam energy on a single CPU core. It is fully vectorized and parallelized and shows strong linear scalability on many-core architectures with up to 80 cores. It also scales within the First Level Event Selection package on many-core clusters with up to 3200 cores.
The developed KF Particle Finder package is a universal platform for short-lived particle reconstruction, physics analysis, and online selection.
We live in an age of data ubiquity. Even the most conservative estimates predict exponential growth in produced, transmitted, and stored data. Big data is used to power business analytics as well as to foster scientific discoveries. In many cases, the explosion of produced data exceeds the capabilities of digital storage systems. Scientific high-performance computing environments cope with this problem by utilizing large, distributed storage systems. These complex systems can only provide a high degree of reliability and durability by means of data redundancy. The most straightforward way of doing so is to replicate the data over different physical devices. However, more elaborate approaches, such as erasure coding, can provide similar data protection while utilizing less storage. Recently, software-defined reliability methods have begun to replace traditional, hardware-based solutions. The complicated failure modes of storage system components also warrant checksums to guarantee long-term data integrity. To cope with ever-increasing data volumes, a flexible and efficient software implementation of error correction codes is of great importance. This thesis introduces a method for realizing a flexible Reed-Solomon erasure code using the “Just-In-Time” compilation technique. By exploiting intrinsic arithmetic redundancy in the algorithm, and by relying on modern optimizing compilers, we obtain a throughput-efficient erasure code implementation. Additionally, data parallelism is exploited effortlessly by instructing the compiler to produce SIMD code for the desired execution platform. We show results for codes implemented using the SSE and AVX2 SIMD instruction sets on x86, and the NEON instruction set on ARM platforms. Next, we introduce a framework for efficient vectorized RAID-Z redundancy operations in the ZFS file system.
Traditional table-based Galois field multiplication algorithms are replaced with custom SSE and AVX2 parallel methods, providing significantly faster and more efficient parity operations. The implementation of this framework has been publicly available as part of the ZFS on Linux project since version 0.7. Finally, we propose a new erasure scheme for use with existing high-performance parallel filesystems. The described reliability middleware (ECCFS) allows the definition of flexible, file-based reliability policies that adapt to customized user needs. By utilizing a block erasure code, ECCFS achieves optimal storage, computation, and network resource utilization while providing a high level of reliability. The distributed nature of the middleware allows greater scalability and more efficient utilization of storage and network resources, improving the availability of the system.
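The Galois field arithmetic that these table-based and SIMD kernels compute can be sketched in scalar form. The reduction polynomial 0x11D is a common choice for GF(2^8) in Reed-Solomon implementations and is assumed here for illustration; it is not taken from the thesis.

```python
# Scalar GF(2^8) multiply: carry-less multiply reduced modulo
# x^8 + x^4 + x^3 + x^2 + 1 (0x11D). SIMD implementations replace
# exactly this loop (or its lookup-table equivalent) with parallel
# shuffle/XOR operations over many bytes at once.

def gf256_mul(a, b):
    r = 0
    for _ in range(8):
        if b & 1:
            r ^= a          # conditionally add (XOR) current multiple of a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= 0x11D      # reduce back into 8 bits
    return r

# RAID-Z-style parity over three data bytes: P is plain XOR, while a
# second RS-style parity Q weights each byte by a distinct field element.
data = [0x12, 0x34, 0x56]
p = data[0] ^ data[1] ^ data[2]
q = gf256_mul(1, data[0]) ^ gf256_mul(2, data[1]) ^ gf256_mul(4, data[2])
print(hex(p), hex(q))  # 0x70 0x3f
```

Because every byte of a stripe is processed by the same short XOR/shift sequence, the operation vectorizes trivially, which is why compiler-generated SSE/AVX2/NEON code can approach memory bandwidth.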
The future heavy-ion experiment CBM (FAIR/GSI, Darmstadt, Germany) will focus on measurements of very rare probes, which require the experiment to operate at extreme interaction rates of up to 10 MHz. Due to the high multiplicity of charged particles in heavy-ion collisions, this will lead to data rates of up to 1 TB/s. To match currently achievable archival rates, this data flow has to be reduced online by more than two orders of magnitude.
The rare observables feature complicated trigger signatures and require full event topology reconstruction to be performed online. The huge data rates, together with the absence of simple hardware triggers, make the traditional latency-limited trigger architectures typical of conventional experiments inapplicable to CBM. Instead, CBM will employ a novel data acquisition concept with autonomous, self-triggered front-end electronics.
While in conventional experiments with event-by-event processing the association of detector hits with the corresponding physical event is known a priori, this is not true for the CBM experiment, where the reconstruction algorithms must be modified to process non-event-associated data. At the highest interaction rates, the time differences between hits belonging to the same collision will be larger than the average time difference between two consecutive collisions; thus, events will overlap in time. Due to this possible overlap of events, one needs to analyze time-slices rather than isolated events.
The time-stamped data will be shipped and collected into a readout buffer in the form of a time-slice of a certain length. The time-slice data will then be delivered to a large computer farm, where the archival decision is made after online reconstruction. In this case, the association of hit information with physical events must be performed in software and requires full online event reconstruction not only in space but also in time, so-called 4-dimensional (4D) track reconstruction.
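Why time-based grouping works at low rates, and fails at high ones, can be illustrated with a minimal sketch (illustrative only, not the CBM software): hits are clustered wherever the time gap between consecutive hits exceeds a threshold. At low rate the clusters coincide with collisions; at 10 MHz the gaps shrink below the hit time spread, the clusters merge, and the full 4D reconstruction described above becomes necessary.

```python
# Group time-stamped hits into candidate events by gaps in time.

def cluster_by_gap(hit_times, max_gap):
    """Split a sorted list of hit times wherever the gap exceeds max_gap."""
    groups, current = [], [hit_times[0]]
    for t in hit_times[1:]:
        if t - current[-1] > max_gap:
            groups.append(current)
            current = []
        current.append(t)
    groups.append(current)
    return groups

# Low rate: two collisions well separated in time (units are arbitrary).
hits = [0.0, 0.4, 0.6, 5.0, 5.2, 5.3, 5.9]
print(cluster_by_gap(hits, max_gap=1.0))
# [[0.0, 0.4, 0.6], [5.0, 5.2, 5.3, 5.9]]
```

At the highest CBM rates no choice of `max_gap` separates the collisions, so hit-to-event association has to come out of the track reconstruction itself rather than precede it.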
Within the scope of this work, the 4D track finder algorithm for online reconstruction has been developed. The 4D CA track finder reproduces the performance and speed of the traditional event-based algorithm. It is both vectorized (using SIMD instructions) and parallelized (between CPU cores), and it shows strong scalability on many-core systems: a speed-up factor of 10.1 has been achieved on a CPU with 10 hyper-threaded physical cores.
The 4D CA track finder algorithm is ready for the time-slice-based reconstruction in the CBM experiment.