As an integral part of ALICE, the dedicated heavy ion experiment at CERN’s Large Hadron Collider, the Transition Radiation Detector (TRD) contributes to the experiment’s tracking, triggering and particle identification. The central element of the TRD’s processing chain is its trigger and readout processor, the Global Tracking Unit (GTU). The GTU implements fast triggers on various signatures, which rely on the reconstruction of up to 20 000 particle track segments into global tracks, and performs the buffering and processing of event raw data as part of a complex detector readout tree.
The high data rates the system has to handle and its dual use as trigger and readout processor with shared resources and interwoven processing paths require the GTU to be a unique, high-performance parallel processing system. To achieve high data taking efficiency, all elements of the GTU are optimized for high running stability and low dead time.
The solutions presented in this thesis for the handling of readout data in the GTU, from the initial reception to the final assembly and transmission to the High-Level Trigger computer farm, address all these aspects. The presented concepts employ multi-event buffering, in-stream data processing, extensive embedded diagnostics, and advanced features of modern FPGAs to build a robust high-performance system that can conduct the high-bandwidth readout of the TRD with maximum stability and minimized dead time. The work summarized here spans the complete process from the conceptual layout of the multi-event data handling and segment control to its implementation, simulation, verification, operation and commissioning. It also covers the system upgrade for the second data taking period and presents an analysis of the actual system performance.
The presented design of the GTU’s input stage, which comprises 90 FPGA-based nodes, is built to support multi-event buffering for the data received from the 18 TRD supermodules on 1080 optical links at the full sender aggregate net bandwidth of 2.16 Tbit/s. With careful design of the control logic and the overall data path, the readout on the 18 concentrator nodes of the supermodule stage can utilize an effective aggregate output bandwidth of initially 3.33 GiB/s and, after the successful readout bandwidth upgrade, 6.50 GiB/s via 18 optical links. The high achievable readout link utilization of more than 99 % and the intermediate buffering of events on the GTU help to keep the dead time associated with the local event building and readout typically below 10 %. The GTU has been used for production data taking since the start-up of the experiment and has ever since performed the event buffering, local event building and readout for the TRD in a correct, efficient and highly dependable fashion.
Driven by rapid technological advancements, the amount of data that is created, captured, communicated, and stored worldwide has grown exponentially over the past decades. Along with this development it has become critical for many disciplines of science and business to be able to gather and analyze large amounts of data. The sheer volume of the data often exceeds the capabilities of classical storage systems, with the result that current large-scale storage systems are highly distributed and comprise a large number of individual storage components. As with any other electronic device, the reliability of storage hardware is governed by certain probability distributions, which in turn are influenced by the physical processes used to store the information. The traditional way to deal with the inherent unreliability of combined storage systems is to replicate the data several times. Another popular approach to achieve failure tolerance is to calculate the block-wise parity in one or more dimensions. With a better understanding of the different failure modes of storage components, it has become evident that sophisticated high-level error detection and correction techniques are indispensable for the ever-growing distributed systems. The use of powerful cyclic error-correcting codes, however, comes with a high computational penalty, since the required operations over finite fields do not map well onto current commodity processors.

This thesis introduces a versatile coding scheme with fully adjustable fault tolerance that is tailored specifically to modern processor architectures. To reduce stress on the memory subsystem, the conventional table-based algorithm for multiplication over finite fields has been replaced with a polynomial version. This arithmetically intense algorithm is better suited to the wide SIMD units of currently available general-purpose processors, but also shows significant benefits on modern many-core accelerator devices (for instance the popular general-purpose graphics processing units). A CPU implementation using SSE and a GPU version using CUDA are presented.

The performance of the multiplication depends on the distribution of the polynomial coefficients in the finite field elements. This property has been used to construct suitable matrices that generate a linear systematic erasure-correcting code with significantly increased multiplication performance for the relevant matrix elements. Several approaches to obtain the optimized generator matrices are elaborated and their implications are discussed. A Monte-Carlo-based construction method makes it possible to influence the specific shape of the generator matrices and thus to adapt them to special storage and archiving workloads. Extensive benchmarks on CPU and GPU demonstrate the superior performance and the future application scenarios of this novel erasure-resilient coding scheme.
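As a minimal illustration of the idea (a scalar sketch, not the SIMD/CUDA code developed in the thesis; the reduction polynomial 0x11D is only one common choice for Reed-Solomon-style codes, and the function name is invented here), multiplication in GF(2^8) can be written as shift-and-XOR polynomial arithmetic instead of table look-ups:

```cpp
#include <cstdint>
#include <cstdio>

// Carry-less ("polynomial") multiplication in GF(2^8), reduced modulo
// x^8 + x^4 + x^3 + x^2 + 1 (0x11D), a reduction polynomial commonly used
// for Reed-Solomon-style erasure codes.  Each step either shifts the partial
// product or folds the overflow bit back in with an XOR, so the routine
// needs no lookup tables.
static uint8_t gf256_mul(uint8_t a, uint8_t b) {
    uint8_t p = 0;
    for (int i = 0; i < 8; ++i) {
        if (b & 1)
            p ^= a;                   // add (XOR) the current partial product
        bool carry = a & 0x80;        // would the next shift leave GF(2^8)?
        a <<= 1;
        if (carry)
            a ^= 0x1D;                // reduce modulo the field polynomial
        b >>= 1;
    }
    return p;
}

int main() {
    // Multiply two field elements; in an erasure code these would be a data
    // byte and a generator-matrix coefficient.
    std::printf("0x53 * 0xCA = 0x%02X\n", gf256_mul(0x53, 0xCA));
    return 0;
}
```

Because the inner loop is pure bit logic on independent bytes, the same pattern maps directly onto wide SIMD units or GPU threads, which is what the thesis exploits.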
The anan project is a tool for troubleshooting in distributed high-performance computers. The novelty of the contribution is that well-known methods, which are already used successfully for debugging software and hardware, have been transferred to high-performance computing. Within this thesis, a tool named anan was implemented that assists in troubleshooting. In addition, it can be used as a more dynamic form of monitoring. Both use cases
have been tested.
The tool consists of two parts:
1. a part named anan, which is operated interactively by the user,
2. and a part named anand, which automatically collects the requested measurements and executes commands when necessary.
The anan part executes sensors (small pattern-driven algorithms) whose results are merged by anan. To a first approximation, anan can be described as a monitoring system that (1) can be reconfigured quickly and (2) can measure more complex quantities that go beyond correlations of simple time series.
A new era in experimental nuclear physics has begun with the start-up of the Large Hadron Collider at CERN and its dedicated heavy-ion detector system ALICE. Measuring the highest energy density ever produced in nucleus-nucleus collisions, the detector has been designed to study the properties of the created hot and dense medium, assumed to be a Quark-Gluon Plasma.
Comprising 18 high-granularity sub-detectors, ALICE delivers data from a few million electronic channels for proton-proton and heavy-ion collisions.
The produced data volume can reach up to 26 GByte/s for central Pb–Pb collisions at the design luminosity of L = 10^27 cm^-2 s^-1, challenging not only the data storage but also the physics analysis. A High-Level Trigger (HLT) has been built and commissioned to reduce that amount of data to a storable volume prior to archiving, by means of data filtering and compression without loss of physics information. Implemented as a large high-performance compute cluster, the HLT is able to perform a full reconstruction of all events at the time of data taking, which allows triggering based on the information of a complete event. Rare physics probes with high transverse momentum can be identified and selected to enhance the overall physics reach of the experiment.
The commissioning of the HLT is at the center of this thesis. Because the HLT is deeply embedded in the ALICE data path and therefore interfaces with all other ALICE subsystems, this commissioning posed not only a major technical challenge but also required a massive coordination effort, which was concluded with the first proton-proton collisions reconstructed by the HLT. The thesis is completed by the study and implementation of on-line high-transverse-momentum triggers.
Ultrarelativistic Quantum Molecular Dynamics is a physics model to describe the transport, collision, scattering, and decay of nuclear particles. The UrQMD framework has been in use for nearly 20 years since its first development. In this period, computing aspects, the design of the code, and the efficiency of computation have been minor points of interest. Nowadays an additional issue arises: the run time of the framework no longer decreases with new hardware generations.
The current development in computing hardware is mainly focused on parallelism. Especially in scientific applications, a high degree of parallelisation can be achieved due to the superposition principle. This thesis shows how modern design criteria and algorithm redesign are applied to physics frameworks. The redesign, with special emphasis on many-core architectures, allows for significant improvements in execution speed.
The most time-consuming part of UrQMD is a newly introduced relativistic hydrodynamic phase. The algorithm used to simulate the hydrodynamic evolution is SHASTA. As the sequential form of SHASTA is successfully applied in various simulation frameworks for heavy-ion collisions, its possible parallelisation is analysed. Two different implementations of SHASTA are presented.
The first one is an improved sequential implementation. By applying a more concise design and avoiding unnecessary memory copies, the execution time could be reduced to half of the FORTRAN version’s execution time. Memory usage could be reduced by 80 % compared to the original version.
The second implementation concentrates fully on the use of many-core architectures and deviates significantly from the classical implementation. In contrast to the sequential implementation, it follows a recalculate-instead-of-look-up paradigm: intermediate values are recomputed on the fly rather than stored and read back. By this means the execution could be accelerated by up to a factor of 460 on GPUs.
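The paradigm can be illustrated with a minimal, hypothetical 1D stencil (this is not the SHASTA implementation from the thesis; the update rule and all numbers are invented for the example):

```cpp
#include <cstdio>
#include <vector>

// Toy illustration of "recalculate instead of memory look-up" on a 1D stencil.
// Both variants advance a field by the same diffusive update; they differ only
// in whether the face fluxes are staged in memory or recomputed on the fly.

// Variant A: classical approach, a flux array is precomputed and then read back.
void step_with_lookup(std::vector<double>& u, double d) {
    const std::size_t n = u.size();
    std::vector<double> flux(n - 1);          // extra memory traffic
    for (std::size_t i = 0; i + 1 < n; ++i)
        flux[i] = d * (u[i + 1] - u[i]);      // store every face flux
    for (std::size_t i = 1; i + 1 < n; ++i)
        u[i] += flux[i] - flux[i - 1];        // look the fluxes up again
}

// Variant B: many-core friendly approach, each cell recomputes the two fluxes
// it needs from the old field.  Arithmetic is duplicated, but no intermediate
// array has to be written and read back.
void step_with_recompute(std::vector<double>& u, double d) {
    const std::vector<double> old = u;        // read-only copy of the old state
    const std::size_t n = u.size();
    for (std::size_t i = 1; i + 1 < n; ++i) { // on a GPU: one thread per cell
        double flux_r = d * (old[i + 1] - old[i]);
        double flux_l = d * (old[i] - old[i - 1]);
        u[i] += flux_r - flux_l;
    }
}

int main() {
    std::vector<double> a(16, 0.0), b(16, 0.0);
    a[8] = b[8] = 1.0;                        // same initial spike in both fields
    step_with_lookup(a, 0.1);
    step_with_recompute(b, 0.1);
    std::printf("cell 8: lookup=%.3f recompute=%.3f\n", a[8], b[8]);
    return 0;
}
```

Both variants produce the same result; the second trades redundant arithmetic for reduced memory traffic, which is the trade-off that pays off on many-core hardware.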
Additionally, a stability analysis of the UrQMD model is presented. Applying metaprogramming, UrQMD is compiled and executed in a massively parallel setup. The resulting simulation data of all parallel UrQMD instances were then gathered and analysed. In this way, UrQMD could be shown to be highly stable with respect to the uncertainties of the experimental input data.
As a further application of modern programming paradigms, a prototypical implementation of the worldline formalism is presented. This formalism allows for a direct calculation of Feynman integrals and therefore constitutes an interesting enhancement of the UrQMD model. Its massively parallel implementation on GPUs is examined.
High-energy physics experiments aim to deepen our understanding of the fundamental structure of matter and the governing forces. One of the most challenging aspects of the design of new experiments is data management and event selection. The search for increasingly rare and intricate physics events asks for high-statistics measurements and sophisticated event analysis. With progressively complex event signatures, traditional hardware-based trigger systems reach the limits of realizable latency and complexity. The Compressed Baryonic Matter experiment (CBM) employs a novel approach for data readout and event selection to address these challenges. Self-triggered, free-streaming detectors push all data to a central compute cluster, called First-level Event Selector (FLES), for software-based event analysis and selection. While this concept solves many issues present in classical architectures, it also sets new challenges for the design of the detector readout systems and online event selection.
This thesis presents an efficient solution to the data management challenges presented by self-triggered, free-streaming particle detectors. The FLES must receive asynchronously streamed data from a heterogeneous detector setup at rates of up to 1 TB/s. The real-time processing environment implies that all components have to deliver high performance and reliability to record as much valuable data as possible. The thesis introduces a time-based data model to partition the input streams into containers of fixed length in experiment time for efficient data management. These containers provide all necessary metadata to enable generic, detector-subsystem-agnostic data distribution across the entire cluster. An analysis shows that the introduced data overhead is well below 1 % for a wide range of system parameters.
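A hypothetical sketch may help to picture such a container; the field names and widths below are illustrative only and do not reproduce the actual FLES data format:

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

// Hypothetical sketch of a time-based container as described above: each
// container covers a fixed interval of experiment time and carries enough
// metadata to be routed through the cluster without interpreting the
// detector payload.
struct ContainerDescriptor {
    uint64_t start_time_ns;   // start of the covered interval in experiment time
    uint64_t duration_ns;     // fixed container length in experiment time
    uint16_t subsystem_id;    // which detector subsystem produced the data
    uint16_t flags;           // e.g. truncation or error indicators
    uint32_t payload_bytes;   // size of the opaque detector payload
};

struct Container {
    ContainerDescriptor desc;
    std::vector<uint8_t> payload;   // raw, subsystem-specific data, never parsed here
};

// Generic distribution logic needs only the descriptor: all containers whose
// interval falls into the same experiment-time slice are shipped to the same
// compute node, regardless of which subsystem they belong to.
inline uint32_t target_node(const ContainerDescriptor& d, uint32_t num_nodes) {
    return static_cast<uint32_t>(d.start_time_ns / d.duration_ns) % num_nodes;
}

int main() {
    Container c{{1'000'000, 100'000, 3, 0, 0}, {}};
    std::printf("container goes to node %u of 16\n", target_node(c.desc, 16));
    return 0;
}
```

Because the distribution logic touches only the descriptor and treats the payload as opaque bytes, the same code path can serve every detector subsystem.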
Furthermore, a concept and the implementation of a detector data input interface for the CBM FLES, optimized for resource-efficient data transport, are presented. The central element of the architecture is an FPGA-based PCIe extension card for the FLES entry nodes. The hardware designs developed in the thesis enable interfacing with a diverse set of detector systems. A custom, high-throughput DMA design structures data in a way that enables low-overhead access and efficient software processing. The ability to share the host DMA buffers with other devices, such as an InfiniBand HCA, allows for true zero-copy data distribution between the cluster nodes. The discussed FLES input interface is fully implemented and has already proven its reliability in production operation in various physics experiments.
The focus of interest in high-energy physics experiments is shifting towards the search for and study of extremely rare particles and phenomena. The search for rare probes requires an increase in the available statistics, which is achieved by increasing the particle interaction rate. The structure of the events also becomes more complicated: the multiplicity of particles in each event increases, and pile-up appears. Due to technical limitations, such a data flow cannot be stored in full on available storage devices. The solution to this problem is the correct triggering of events and real-time data processing.
In this work, the acceleration and improvement of the Cellular Automaton based algorithms for the reconstruction of charged-particle trajectories in the STAR experiment are considered, with the goal of using them for real-time track reconstruction within the High-Level Trigger. This is an important step in the preparation of the CBM experiment as part of the FAIR Phase-0 program. Studying online data processing methods under real conditions at similar interaction energies makes it possible to understand this process and to identify possible weaknesses of the approach.
Two versions of the Cellular Automaton based track reconstruction are discussed, which are used depending on the features of the detector systems. The HFT CA Track Finder, similar to the tracking algorithm of the CBM experiment, has been accelerated by several hundred times using both algorithm optimization and data-level parallelism. The TPC CA Track Finder has been upgraded to improve the reconstruction quality while maintaining a high calculation speed. The algorithm was tuned to work with the new iTPC geometry and extended with an additional module for the reconstruction of very low momentum tracks.
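As a generic illustration of the data-level parallelism mentioned above (a toy sketch, not the STAR or CBM tracker code; all names and numbers are invented for the example), track-candidate parameters can be kept in structure-of-arrays form so that one loop processes many candidates in lock-step:

```cpp
#include <cstdio>
#include <vector>

// Track-candidate parameters in structure-of-arrays layout: the same
// arithmetic is applied to many candidates at once, which maps directly
// onto SIMD registers.
struct Candidates {
    std::vector<float> x, y, tx, ty;   // one entry per track candidate
};

// Propagate all candidates by the same step dz; the loop body is branch-free
// and contiguous in memory, so the compiler can vectorize it (or it can be
// rewritten with explicit SSE/AVX intrinsics).
void extrapolate(Candidates& c, float dz) {
    const std::size_t n = c.x.size();
    for (std::size_t i = 0; i < n; ++i) {
        c.x[i] += c.tx[i] * dz;
        c.y[i] += c.ty[i] * dz;
    }
}

int main() {
    Candidates c;
    for (int i = 0; i < 8; ++i) {
        c.x.push_back(0.f); c.y.push_back(0.f);
        c.tx.push_back(0.1f * i); c.ty.push_back(-0.05f * i);
    }
    extrapolate(c, 10.f);
    std::printf("candidate 3: x=%.2f y=%.2f\n", c.x[3], c.y[3]);
    return 0;
}
```

With this layout, the same loop can process, for example, four or eight candidates per SIMD instruction instead of one.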
The improved track reconstruction algorithm for the TPC detector in the STAR experiment was included in the HLT reconstruction chain and successfully tested in the express production for online analysis of real data. This made it possible to obtain important physics results during the experiment run time, without full offline data processing. The tracker is also being prepared for integration into the standard offline data processing chain, after which it will become the baseline track reconstruction algorithm in the STAR experiment.
This thesis presents various algorithms which have been developed for on-line event reconstruction in the CBM experiment at GSI, Darmstadt, and the ALICE experiment at CERN, Geneva. Although the experiments are different (CBM is a fixed-target experiment with forward geometry, while ALICE has a typical collider geometry), they share common aspects where reconstruction is concerned.
The thesis describes:
— general modifications to the Kalman filter method, which allow one to accelerate, improve, and simplify existing fit algorithms (a minimal sketch of the basic filter step is given after this abstract);
— algorithms developed for track fitting in the CBM and ALICE experiments, including a new method for track extrapolation in a non-homogeneous magnetic field;
— algorithms developed for primary and secondary vertex fitting in both experiments; in particular, a new method for the reconstruction of decayed particles is presented;
— a parallel algorithm developed for on-line tracking in the CBM experiment;
— a parallel algorithm developed for on-line tracking in the High Level Trigger of the ALICE experiment;
— the realisation of the track finders on modern hardware, such as SIMD CPU registers and GPU accelerators.
All the presented methods have been developed by or with the direct participation of the author.
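To make the Kalman filter terminology above concrete, a textbook scalar predict/update step is sketched below; it is a generic illustration (state, noise values, and variable names are invented), not the vectorized track-fit code developed in the thesis:

```cpp
#include <cstdio>

// Textbook scalar Kalman filter (one state variable, one measurement).  The
// thesis's algorithms apply the same predict/update structure to full
// track-parameter vectors and reorganize it for SIMD and GPU execution.
struct State {
    double x;   // estimated parameter (e.g. a track coordinate)
    double C;   // its variance
};

// Prediction: propagate the estimate with a deterministic model x -> F*x and
// inflate the variance by the process noise Q (e.g. multiple scattering).
State predict(State s, double F, double Q) {
    return {F * s.x, F * s.C * F + Q};
}

// Update: blend the prediction with a measurement m of variance V via the
// gain K; the variance always shrinks, which is what makes the fit converge.
State update(State s, double m, double V) {
    double K = s.C / (s.C + V);
    return {s.x + K * (m - s.x), (1.0 - K) * s.C};
}

int main() {
    State s{0.0, 100.0};               // vague initial guess
    const double hits[] = {1.1, 0.9, 1.05};
    for (double m : hits) {
        s = predict(s, 1.0, 0.01);     // static model, small process noise
        s = update(s, m, 0.04);        // measurement variance 0.04
    }
    std::printf("fitted value %.3f, variance %.4f\n", s.x, s.C);
    return 0;
}
```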
Conceptual design of an ALICE Tier-2 centre integrated into a multi-purpose computing facility (2012)
This thesis discusses the issues and challenges associated with the design and operation of a data analysis facility for a high-energy physics experiment at a multi-purpose computing centre. In the spotlight is a Tier-2 centre of the distributed computing model of the ALICE experiment at the Large Hadron Collider at CERN in Geneva, Switzerland. The design steps examined in the thesis include the analysis and optimization of the I/O access patterns of the user workload, the integration of the storage resources, and the development of techniques for effective system administration and operation of the facility in a shared computing environment. A number of I/O performance issues on multiple levels of the I/O subsystem, introduced by the use of hard disks for data storage, have been addressed by means of exhaustive benchmarking and a thorough analysis of the I/O of the user applications in the ALICE software framework. Defining the requirements for the storage system, describing the potential performance bottlenecks and single points of failure, and examining possible ways to avoid them allows one to develop guidelines for how to integrate the storage resources. A solution for preserving the experiment-specific software stack in a shared environment is presented, along with its effects on the user workload performance. A proposal for a flexible model to deploy and operate the ALICE Tier-2 infrastructure and applications in a virtual environment, through adoption of cloud computing technology and the 'Infrastructure as Code' concept, completes the thesis. Scientific software applications can be computed efficiently in a virtual environment, and there is an urgent need to adapt the infrastructure for effective usage of cloud resources.
Quarks and gluons are the building blocks of all hadronic matter, like protons and neutrons. Their interaction is described by Quantum Chromodynamics (QCD), a theory under test by large-scale experiments like the Large Hadron Collider (LHC) at CERN and, in the future, at the Facility for Antiproton and Ion Research (FAIR) at GSI. However, perturbative methods can only be applied to QCD at high energies. Studies from first principles are possible via a discretization onto a Euclidean space-time grid. This discretization of QCD is called Lattice QCD (LQCD) and is the only ab-initio option outside of the high-energy regime. LQCD is extremely compute and memory intensive. In particular, it is by definition always bandwidth limited. Thus, despite the complexity of LQCD applications, it led to the development of several specialized compute platforms and influenced the development of others. However, in recent years General-Purpose computation on Graphics Processing Units (GPGPU) has emerged as a new means for parallel computing. In contrast to machines traditionally used for LQCD, graphics processing units (GPUs) are a mass-market product. This promises advantages in both the pace at which higher-performing hardware becomes available and its price.

CL2QCD is an OpenCL-based implementation of LQCD using Wilson fermions that was developed within this thesis. It operates on GPUs by all major vendors as well as on central processing units (CPUs). On the AMD Radeon HD 7970 it provides the fastest double-precision Dslash kernel for a single GPU, achieving 120 GFLOPS. The Dslash kernel, the most compute-intensive kernel in LQCD simulations, is commonly used to compare LQCD platforms. This performance is enabled by an in-depth analysis of optimization techniques for bandwidth-limited codes on GPUs. Furthermore, an analysis of the communication between GPU and CPU, as well as between multiple GPUs, enables high-performance Krylov space solvers and linear scaling to multiple GPUs within a single system.

LQCD calculations require a sampling of the phase space, which is performed by the hybrid Monte Carlo (HMC) algorithm. For this task, a single AMD Radeon HD 7970 GPU provides four times the performance of two AMD Opteron 6220 processors running an optimized reference code. The same advantage is achieved in terms of energy efficiency. In terms of normalized total cost of acquisition (TCA), GPU-based clusters match conventional large-scale LQCD systems. In contrast to those, however, they can be scaled up from a single node. Examples of large GPU-based systems are LOEWE-CSC and SANAM. On both, CL2QCD has already been used in production for LQCD studies.
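For intuition on why the Dslash kernel is bandwidth limited, a rough roofline-style estimate can be sketched (assumed numbers, not taken from the thesis: the conventional ~1320 floating-point operations per lattice site for the Wilson Dslash, naive double-precision data volumes without cache reuse, and a nominal HD 7970 memory bandwidth of about 264 GB/s):

```cpp
#include <cstdio>

// Back-of-the-envelope estimate of the attainable Dslash rate on a
// bandwidth-limited device.  Assumptions: 1320 flops per lattice site; per
// site roughly 8 neighbour spinors + 1 result spinor (24 doubles each) plus
// 8 SU(3) gauge links (18 doubles each) moved through memory, no reuse.
int main() {
    const double flops_per_site = 1320.0;
    const double bytes_per_site = (8 + 1) * 24 * 8.0 + 8 * 18 * 8.0;  // 2880 B
    const double bandwidth_gbs  = 264.0;   // assumed nominal memory bandwidth

    double intensity = flops_per_site / bytes_per_site;   // flop/byte
    double gflops    = intensity * bandwidth_gbs;         // attainable rate
    std::printf("arithmetic intensity %.2f flop/byte -> ~%.0f GFLOPS\n",
                intensity, gflops);
    // Roughly 0.46 flop/byte and ~120 GFLOPS, close to the figure quoted
    // above and far below the GPU's arithmetic peak: memory traffic is the
    // limiting factor.
    return 0;
}
```

The estimate lands close to the 120 GFLOPS quoted above, which is why data layout and memory-access optimizations, rather than raw arithmetic throughput, dominate the tuning effort for such codes.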