A central concern in genetics is to identify mechanisms of transcriptional regulation. The aim is to unravel the mapping between the DNA sequence and gene expression. However, this mapping has turned out to be extremely complex: gene regulation is highly cell type-specific, and even moderate changes in gene expression can have functional consequences.
Important contributors to gene regulation are transcription factors (TFs), which can directly interact with the DNA. Often, a first step in understanding the effect of a TF on a gene's regulation is to identify the genomic regions the TF binds to. This requires knowledge of the TF's binding preferences, which are commonly summarized in TF binding motifs. Although the binding motif has been experimentally validated for many TFs, there is still a large number of TFs for which no binding motif is known. Many tools exist that link TF binding motifs to TFs. We developed the method Massif, which improves the performance of such tools by incorporating a domain score that uses the DNA binding domain of the studied TF as additional information.
TF binding sites are often enriched in regulatory elements (REMs) such as promoters or enhancers, where the latter can be located megabases away from their target genes. To understand the regulation of a gene, it is therefore crucial to know where its REMs are located. We introduced the EpiRegio webserver, which holds REMs associated with target genes, predicted across many cell types and tissues using STITCHIT, a previously established method. Our publicly available webserver supports queries for REMs associated with genes (gene query) and for REMs overlapping genomic regions (region query). We illustrated the usefulness of EpiRegio by pointing to a TF that occurs enriched in the REMs of differentially expressed genes in circPLOD2-depleted pericytes. Further, we highlighted genes affected by CRISPR-Cas-induced mutations in non-coding genomic regions using EpiRegio's region query.

Non-coding genetic variants within REMs may alter gene expression by modifying TF binding sites, which can lead to various kinds of traits or diseases. To understand the underlying molecular mechanisms, one aims to evaluate the effect of such genetic variations on TF binding sites. We developed an accurate and fast statistical approach that can assess whether a single nucleotide polymorphism (SNP) is regulatory. Further, we combined this approach with epigenetic data and additional analyses in our Sneep workflow. For instance, it enables the identification of TFs whose binding preferences are affected by the analyzed SNPs, which we illustrate on eQTL datasets for different cell types. Additionally, we used our Sneep workflow to highlight cardiovascular disease genes using regulatory SNPs and REM-gene interactions.
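To make the underlying idea concrete, the following is a minimal sketch (not Sneep itself) of how a SNP's effect on a TF binding site can be assessed: the sequence around the SNP is scored against a position weight matrix (PWM) for both alleles, and a large score change flags the SNP as potentially regulatory. The toy PWM and sequences are invented for illustration.

```python
# Minimal sketch: score the reference vs. alternative allele of a SNP
# against a TF position weight matrix (PWM). A large score difference
# suggests the SNP may alter the TF's binding site. The PWM and the
# sequences below are toy examples, not data from the thesis.
import numpy as np

BASES = {"A": 0, "C": 1, "G": 2, "T": 3}

# Rows: A, C, G, T; columns: motif positions (log-odds vs. uniform background).
pwm = np.log2(np.array([
    [0.7, 0.1, 0.1, 0.6],
    [0.1, 0.7, 0.1, 0.2],
    [0.1, 0.1, 0.7, 0.1],
    [0.1, 0.1, 0.1, 0.1],
]) / 0.25)

def best_motif_score(seq: str) -> float:
    """Best PWM match over all motif-length windows of seq."""
    w = pwm.shape[1]
    return max(
        sum(pwm[BASES[b], j] for j, b in enumerate(seq[i:i + w]))
        for i in range(len(seq) - w + 1)
    )

ref_seq = "TTACGATT"  # sequence around the SNP, reference allele
alt_seq = "TTACTATT"  # same sequence, alternative allele at the fifth base
delta = best_motif_score(ref_seq) - best_motif_score(alt_seq)
print(f"binding score change: {delta:.2f}")  # large |delta| -> candidate regulatory SNP
```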
Overall, the described results allow a better understanding of REM-gene interactions and their interplay with TFs in gene regulation.
With the rise of digitalization and ubiquity of media use, both opportunities and challenges emerge for academic learning. One prevalent challenge is media multitasking, which can become distracting and hinder learning success. This thesis investigates two facets of this issue: the enhancement of data tracking, and the exploration of digital interventions that support self-control.
The first paper focuses on digital tracking of media use, as a comprehensive understanding of digital distractions requires careful data collection to avoid misinterpretations. The paper presents a tracking system where media use is linked to learning activities. An annotation dashboard enabled the enrichment of the log data with self-reports. The efficacy of this system was evaluated in a 14-day online course taken by 177 students, with results confirming the initial assumptions about media tracking.
The second paper tackles the recognition of whether a text was thoroughly read, an issue brought on by the tendency of students to skip lengthy and demanding texts. A method utilizing scroll data and time series classification algorithms is presented and tested, showing promising results for early recognition and intervention.
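As a hedged illustration of the general approach (not the paper's actual pipeline or features), one could summarize each scroll-position time series with a few hand-picked features and train a standard classifier on labeled sessions:

```python
# Illustrative sketch: summarize each scroll-position time series with
# simple features and train a classifier that flags texts which were
# likely skimmed rather than read thoroughly. Features, data, and the
# classifier choice are assumptions, not the paper's method.
import numpy as np
from sklearn.linear_model import LogisticRegression

def scroll_features(positions, timestamps):
    """Hand-picked summary features of one reading session."""
    positions = np.asarray(positions, dtype=float)
    dt = np.diff(np.asarray(timestamps, dtype=float))
    speed = np.abs(np.diff(positions)) / np.maximum(dt, 1e-9)
    return [
        timestamps[-1] - timestamps[0],  # total time on the text (s)
        speed.mean(),                    # average scroll speed (px/s)
        (speed < 1.0).mean(),            # fraction of slow, reading-like movement
        positions.max(),                 # how far down the text the reader got (px)
    ]

# Toy training data: label 1 = read thoroughly, 0 = skimmed.
X = [scroll_features([0, 50, 120, 400], [0, 60, 180, 420]),
     scroll_features([0, 300, 400], [0, 3, 5])]
y = [1, 0]
clf = LogisticRegression().fit(X, y)
print(clf.predict([scroll_features([0, 80, 200, 390], [0, 50, 150, 260])]))
```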
The third paper presents the results of a systematic literature review on the effectiveness of digital self-control tools (DSCTs) in academic learning. The paper identifies gaps in existing research and outlines a roadmap for further research on self-control tools.
The fourth paper shares findings from a survey of 273 students, exploring the practical use and perceived helpfulness of DSCTs. The study highlights the challenge of balancing too restrictive against too lenient DSCTs, particularly for platforms offering both learning content and entertainment. The results also reveal the special role of highly habitual media use.
The fifth paper of this work investigates facets of app-based habit building. In a study over 27 days, 106 school-aged children used the specially developed PROMPT app. The children carried out one of three digital activities each day, each of which was intended to promote deeper or more superficial processing of plans. Significant differences in the processing of plans emerged between the three activities, and the results suggest that a child-friendly planning application needs to be personalized to be effective.
Overall, this work offers a comprehensive insight into the complexity and potentials of dealing with distracting media usage and shows ways for future research and interventions in this fascinating and ever more important field.
Recent advances in artificial neural networks have enabled the quick development of new learning algorithms, which, among other things, pave the way to novel robotic applications. Traditionally, robots are programmed by human experts so as to accomplish pre-defined tasks. Such robots must operate in a controlled environment to guarantee repeatability, are designed to solve one unique task, and require costly hours of development. In developmental robotics, researchers try to artificially imitate the way living beings acquire their behavior by learning. Learning algorithms are key to conceiving versatile and robust robots that can adapt to their environment and solve multiple tasks efficiently. In particular, Reinforcement Learning (RL) studies the acquisition of skills through teaching via rewards. In this thesis, we will introduce RL and present recent advances in RL applied to robotics. We will review Intrinsically Motivated (IM) learning, a special form of RL, and apply in particular the Active Efficient Coding (AEC) principle to the learning of active vision. We also provide an overview of Hierarchical Reinforcement Learning (HRL), another special form of RL, and apply its principle to a robotic manipulation task.
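To make "acquisition of skills through rewards" concrete, here is a textbook tabular Q-learning sketch on a hypothetical chain environment; the thesis's AEC and HRL methods build on far richer variants of this basic idea.

```python
# Textbook tabular Q-learning on a toy chain: the agent moves left or
# right along five states and receives reward 1 at the right end. All
# names and parameters are illustrative.
import random

n_states, n_actions = 5, 2
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, epsilon = 0.1, 0.9, 0.1

def step(state, action):
    """Hypothetical environment: action 1 moves right, action 0 moves left."""
    nxt = max(0, min(n_states - 1, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == n_states - 1 else 0.0)

state = 0
for _ in range(5000):
    # Epsilon-greedy action selection: mostly exploit, sometimes explore.
    action = (random.randrange(n_actions) if random.random() < epsilon
              else max(range(n_actions), key=lambda a: Q[state][a]))
    nxt, reward = step(state, action)
    # Bellman update: move Q(s,a) towards reward + discounted future value.
    future = 0.0 if nxt == n_states - 1 else gamma * max(Q[nxt])
    Q[state][action] += alpha * (reward + future - Q[state][action])
    state = 0 if nxt == n_states - 1 else nxt  # restart episode at the goal

print(Q)  # Q-values now prefer moving right in every state
```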
In the last two decades, our understanding of human gene regulation has improved tremendously. There are plentiful computational methods focusing on integrative data analysis of humans and model organisms such as mouse and Drosophila. However, these tools are not directly employable by researchers working on non-model organisms to answer fundamental biological and evolutionary questions. We aimed to develop new tools and adapt existing software for the analysis of transcriptomic and epigenomic data of one such non-model organism, Paramecium tetraurelia, a unicellular eukaryote. Paramecium contains two diploid (2n) germline micronuclei (MIC) and a polyploid (800n) somatic macronucleus (MAC). The transcriptomic and epigenomic regulatory landscape of the MAC genome, which has 80% protein-coding genes and short intergenic regions, is poorly understood.
We developed a generic automated eukaryotic short interfering RNA (siRNA) analysis tool, called RAPID. Our tool captures diverse siRNA characteristics from small RNA sequencing data and provides easily navigable visualisations. We also introduced a normalisation technique to facilitate the comparison of multiple siRNA-based gene knockdown studies. Further, we developed a pipeline to characterise novel genome-wide endogenous short interfering RNAs (endo-siRNAs). In contrast to many organisms, we found that the endo-siRNAs do not act in cis to silence their parent mRNAs. We also predicted phasing of siRNAs, which are regulated by the RNA interference (RNAi) pathway.
Further, using RAPID, we investigated the aberrations of endo-siRNAs, and the respective transcriptomic alterations, caused by triggering the RNAi pathway through feeding of small RNAs against a target gene. We find that the small RNA transcriptome is altered even if a gene unrelated to the RNAi pathway is targeted. This is important in the context of investigations of genetically modified organisms (GMOs). We suggest that future studies need to distinguish between transcriptomic changes caused by RNAi-inducing techniques and actual regulatory changes.
Subsequently, we adapted existing epigenomics analysis tools to conduct the first comprehensive epigenomic characterisation of nucleosome positioning and histone modifications of the Paramecium MAC. We identified well-positioned nucleosomes shifted downstream of the transcription start site. GC content seems to dictate, in cis, the positioning of nucleosomes, histone marks (H3K4me3, H3K9ac, and H3K27me3), and Pol II in the AT-rich Paramecium genome. We employed a chromatin state segmentation approach on nucleosomes and histone marks, which revealed genes with active, repressive, and bivalent chromatin states. Further, we constructed a regulatory association network of all the aforementioned data using the sparse partial correlation network technique. Our analysis revealed subsets of genes whose expression is positively associated with H3K27me3, in contrast to the negative association with gene expression reported in many other organisms.
Further, we developed a Random Forests classifier to predict gene expression using genic (gene length, intron frequency, etc.) and epigenetic features. Our model has a test performance (PR-AUC) of 0.83. Upon evaluating different feature sets, we found that genic features are as predictive of gene expression as the epigenetic features. We used Shapley local feature explanation values to suggest that high H3K4me3, high intron frequency, short gene length, high sRNA, and high GC content are the most important elements for determining gene expression status.
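As a rough illustration of such a classifier, the sketch below trains a Random Forest on random stand-in data and reports PR-AUC; the feature names follow the abstract, while the data, label rule, and hyperparameters are purely illustrative assumptions.

```python
# Minimal sketch of a Random Forest predicting gene expression status
# from genic and epigenetic features. The data is random; feature names
# follow the abstract, everything else is an illustrative assumption.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import average_precision_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
features = ["gene_length", "intron_freq", "GC_content", "H3K4me3", "sRNA"]
X = rng.random((1000, len(features)))
y = (X[:, 3] + 0.5 * X[:, 1] - 0.3 * X[:, 0] > 0.6).astype(int)  # toy label rule

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
# PR-AUC, the test metric reported in the abstract:
print(average_precision_score(y_te, clf.predict_proba(X_te)[:, 1]))
# Shapley-style local explanations would then be computed on this fitted model.
```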
In this thesis, we developed novel tools and employed several bioinformatics and machine learning methods to characterise the regulatory landscape of the Paramecium (epi)genome.
High-energy physics experiments aim to deepen our understanding of the fundamental structure of matter and the forces governing it. One of the most challenging aspects of the design of new experiments is data management and event selection. The search for increasingly rare and intricate physics events calls for high-statistics measurements and sophisticated event analysis. With progressively complex event signatures, traditional hardware-based trigger systems reach the limits of realizable latency and complexity. The Compressed Baryonic Matter experiment (CBM) employs a novel approach for data readout and event selection to address these challenges. Self-triggered, free-streaming detectors push all data to a central compute cluster, called the First-level Event Selector (FLES), for software-based event analysis and selection. While this concept solves many issues present in classical architectures, it also sets new challenges for the design of the detector readout systems and online event selection.
This thesis presents an efficient solution to the data management challenges presented by self-triggered, free-streaming particle detectors. The FLES must receive asynchronously streamed data from a heterogeneous detector setup at rates of up to 1 TB/s. The real-time processing environment implies that all components have to deliver high performance and reliability to record as much valuable data as possible. The thesis introduces a time-based data model to partition the input streams into containers of fixed length in experiment time for efficient data management. These containers provide all necessary metadata to enable generic, detector-subsystem-agnostic data distribution across the entire cluster. An analysis shows that the introduced data overhead is well below 1 % for a wide range of system parameters.
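As a minimal illustration of this time-based data model, the sketch below partitions an asynchronous stream of timestamped messages into containers of fixed length in experiment time; the field names and container length are illustrative assumptions, not the actual FLES formats.

```python
# Sketch of the time-based data model idea: partition an asynchronous
# stream of timestamped messages into containers of fixed length in
# experiment time. Field names are illustrative, not the FLES format.
from collections import defaultdict

CONTAINER_NS = 1_000_000  # container length in experiment time (ns), illustrative

def partition(stream):
    """Group (timestamp_ns, payload) messages by container index."""
    containers = defaultdict(list)
    for ts, payload in stream:
        containers[ts // CONTAINER_NS].append((ts, payload))
    # Each container carries metadata (index, time bounds) so it can be
    # distributed across the cluster without detector-specific knowledge.
    return [
        {"index": i, "start_ns": i * CONTAINER_NS,
         "end_ns": (i + 1) * CONTAINER_NS, "messages": msgs}
        for i, msgs in sorted(containers.items())
    ]

stream = [(10, b"a"), (999_999, b"b"), (1_500_000, b"c")]
for c in partition(stream):
    print(c["index"], len(c["messages"]))
```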
Furthermore, a concept and the implementation of a detector data input interface for the CBM FLES, optimized for resource-efficient data transport, are presented. The central element of the architecture is an FPGA-based PCIe extension card for the FLES entry nodes. The hardware designs developed in the thesis enable interfacing with a diverse set of detector systems. A custom, high-throughput DMA design structures data in a way that enables low-overhead access and efficient software processing. The ability to share the host DMA buffers with other devices, such as an InfiniBand HCA, allows for true zero-copy data distribution between the cluster nodes. The discussed FLES input interface is fully implemented and has already proven its reliability in production operation in various physics experiments.
Human readers have the ability to infer knowledge from text, even if that particular information is not explicitly stated. In this thesis, we address the phenomenon of text-level implicit information and outline novel automated methods for its recovery.
The main focus of this work is on two types of unexpressed content: content arising between sentences (implicit discourse relations) and content arising within sentences (implicit semantic roles).
Traditional approaches mostly rely on costly rich linguistic features, e.g., sentiment or frame-based lexicons, and require heuristics or manual feature engineering.
As an improvement, we propose a collection of generic resource-lean methods, implemented in the form of statistical background knowledge or by means of neural architectures.
Our models are largely language-independent and produce state-of-the-art performance, e.g., in the classification of Chinese implicit discourse relations, or the detection of locally covert predicative arguments in free texts.
In novel experiments, we quantitatively demonstrate that both types of implicit information are mutually dependent insofar as, for instance, some implicit roles directly correlate with implicit discourse relations of similar properties.
We show that implicit information processing further benefits downstream applications and demonstrate its applicability to the higher-level task of narrative story understanding.
In the conclusion of the dissertation, we argue for the need for implicit information processing in order to realize the goal of true natural language understanding.
Multi-view microscopy techniques are used to increase the resolution along the optical axis for 3D imaging; without this, the resolution is insufficient to resolve subcellular events. In addition, parts of the images of opaque specimens are often highly degraded or masked. Both problems motivate scientists to record the same specimen from multiple directions. The images then have to be digitally fused into a single high-quality image. Selective-plane illumination microscopy has proven to be a powerful imaging technique due to its unsurpassed acquisition speed and gentle optical sectioning. However, even in the case of multi-view imaging techniques that illuminate and image the sample from multiple directions, light scattering inside tissues often severely impairs image contrast.
Here we show that, for C. elegans embryos, multi-view registration can be achieved based on segmented nuclei. However, segmenting nuclei in densely populated samples such as the C. elegans embryo is challenging. We propose a method that uses a 3D Mexican hat filter for preprocessing and 3D Gaussian curvature in the post-processing step to separate touching nuclei. We applied this method successfully to three data sets of C. elegans embryos, each in three different views. Our segmentation results outperform previous methods. Moreover, we provide a simple GUI for manual correction and for adjusting the parameters for different data.
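A rough sketch of the preprocessing idea follows: a 3D Mexican hat (Laplacian of Gaussian) filter enhances blob-like nuclei before thresholding and labelling. The sigma and threshold values are illustrative assumptions, not the parameters tuned in the thesis, and the curvature-based splitting step is only indicated.

```python
# Sketch: enhance blob-like nuclei in a 3D volume with a Mexican hat
# (negated Laplacian of Gaussian) filter, then threshold and label.
# Sigma and the threshold are illustrative, not the tuned values.
import numpy as np
from scipy import ndimage

volume = np.zeros((32, 32, 32))
volume[14:18, 14:18, 14:18] = 1.0          # toy "nucleus"

# Negated LoG == Mexican hat: bright blobs give strong positive responses.
response = -ndimage.gaussian_laplace(volume, sigma=2.0)
mask = response > 0.5 * response.max()     # illustrative threshold
labels, n = ndimage.label(mask)
print(f"{n} candidate nuclei")
# In the thesis, touching nuclei are then separated using 3D Gaussian
# curvature as a post-processing step (omitted here).
```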
We then propose a method that combines point and voxel registration for an accurate multi-view registration of the C. elegans embryo, which does not need any special experimental preparation. We demonstrate the performance of our approach on data acquired from fixed embryos of C. elegans worms. This multi-step approach is successfully evaluated by comparison with different methods and also by using synthetic data. The proposed method overcomes the typically low resolution along the optical axis and enables stitching together the different parts of the embryo available through the different views. A tool for running the code and analyzing the results was developed.
Programmable hardware in the form of FPGAs has found its place in various high energy physics experiments over the past few decades. These devices provide highly parallel and fully configurable data transport, data formatting, and data processing capabilities with custom interfaces, even in rigid or constrained environments. Additionally, FPGA functionalities and the number of their logic resources have grown exponentially in the last few years, making FPGAs more and more suitable for complex data processing tasks. ALICE is one of the four main experiments at the LHC and is specialized in the study of heavy-ion collisions. The readout chain of the ALICE detectors makes use of FPGAs in various places. The Read-Out Receiver Cards (RORCs) are one example of FPGA-based readout hardware, forming the interface between the custom detector electronics and the commercial server nodes in the data processing clusters of the Data Acquisition (DAQ) system as well as the High Level Trigger (HLT). These boards are implemented as server plug-in cards with serial optical links towards the detectors. Experimental data is received via more than 500 optical links, already partly pre-processed in the FPGAs, and pushed towards the host machines. Computer clusters consisting of a few hundred nodes collect, aggregate, compress, reconstruct, and prepare the experimental data for permanent storage and later analysis. With the end of the first LHC run period in 2012 and the start of Run 2 in 2015, the DAQ and HLT systems were renewed and several detector components were upgraded for higher data rates and event rates. Increased detector link rates and obsolete host interfaces made it impossible to reuse the previous RORCs in Run 2.
This thesis describes the development, integration, and maintenance of the next generation of RORCs for ALICE in Run 2. A custom hardware platform, initially developed as a joint effort between the ALICE DAQ and HLT groups in the course of this work, found its place in the Run 2 readout systems of the ALICE and ATLAS experiments. The hardware fulfills all experiment requirements, matches its target performance, and has been running stably in the production systems since the start of Run 2. Firmware and software developments for the hardware evaluation, the design of the board, the mass production hardware tests, as well as the operation of the final board in the HLT were carried out as part of this work. 74 boards were integrated into the HLT hardware and software infrastructure, with various firmware and software developments, to provide the main experimental data input and output interface of the HLT for Run 2. The hardware cluster finder, an FPGA-based data pre-processing core from the previous generation of RORCs, was ported to the new hardware. It has been improved and extended to meet the experimental requirements throughout Run 2. The throughput of this firmware component could be doubled and the algorithm extended, providing improved noise rejection and an increased overall mean data compression ratio compared to the previous implementation. The hardware cluster finder forms a crucial component in the HLT data reconstruction and compression scheme, with the processing performance of one board equivalent to around ten server nodes for comparable processing steps in software.
The work on the firmware development, especially on the hardware cluster finder, once more demonstrated that developing and maintaining data processing algorithms with the common low-level hardware description methods is tedious and time-consuming. Therefore, a high-level synthesis (HLS) hardware description method applying dataflow computing at an algorithmic level to FPGAs was evaluated in this context. The hardware cluster finder served as an example of a typical data processing algorithm in a high energy physics readout application. The existing and highly optimized low-level implementation provided a reference for comparisons in terms of throughput and resource usage. The cluster finder algorithm could be implemented in the dataflow description with comparably little effort, providing fast development cycles, compact code and, at the same time, simplified extension and maintenance options. The performance results in terms of throughput and resource usage are comparable to the manual implementation. The dataflow environment proved to be highly valuable for design space explorations. An integration of the dataflow description into the HLT firmware and software infrastructure was demonstrated as a proof of concept. A high-level hardware description could thus ease the design space exploration, the initial development, the maintenance, and the extension of hardware algorithms for high energy physics readout applications.
Software evolves. Developers and programmers manifest the needs that arise from evolving software by making changes to the source code. While developers make such changes, reusing old code and rewriting existing code are inevitable. There are many challenges that a developer faces when manually reusing old code or rewriting existing code. Software tools and program transformation systems aid such reuse or rewriting of program source code. But there are frequently occurring development tasks that are hard to accomplish manually and that current state-of-the-art tools are still not able to adequately automate. In this thesis, we discuss some of these unexplored challenges that a developer faces while reusing and rewriting program source code, the significance of such challenges, the existing automation support for these challenges, and how we can improve upon it.
Modern software development relies on code reuse, which software developers typically realize through hand-written abstractions, such as functions, methods, or classes. However, such abstractions can be challenging to develop and maintain. An alternative form of reuse is copy-paste-modify, in which developers explicitly duplicate source code to adapt the duplicate for a new purpose. Copy-pasted code results in code clones, i.e., groups of code fragments that are similar to each other. Past research strongly suggests that copy-paste-modify is a popular technique among software developers. In a small user study, we show that copy-paste-modify can be substantially faster to use than manual abstraction.
One might propose that software developers should forego hand-written abstractions in favour of copying and pasting. However, empirical evidence also shows that copy-paste-modify complicates software maintenance and increases the frequency of bugs. Furthermore, the developers in an informal poll we conducted strongly preferred to read code written using abstractions. To address the concern around copy-paste-modify, we propose a tool that merges similar pieces of code and automatically creates suitable abstractions. Our tool allows developers to get the best of both worlds: easy reuse together with custom abstractions. Because different kinds of abstractions may be beneficial in different contexts, our tool provides multiple abstraction mechanisms, which we selected based on a study of popular open-source repositories.
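To illustrate the kind of transformation such a merging tool performs, here is a contrived before/after example (in Python for brevity; the thesis prototype targets C++): two near-clones created by copy-paste-modify are merged into a single abstraction whose variation point becomes a parameter.

```python
# Contrived before/after illustrating the merging idea (the thesis
# prototype targets C++; Python is used here only for brevity).

# Two near-clones created by copy-paste-modify:
def total_price(items):
    s = 0.0
    for it in items:
        s += it.price
    return s

def total_weight(items):
    s = 0.0
    for it in items:
        s += it.weight
    return s

# Merged abstraction: the variation point (which attribute is summed)
# becomes a parameter, removing the duplication.
def total(items, key):
    s = 0.0
    for it in items:
        s += key(it)
    return s

# Usage: total(items, lambda it: it.price); total(items, lambda it: it.weight)
```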
To demonstrate the feasibility of our approach, we have designed and implemented a prototype merging tool for C++ and evaluated it on a number of clones exhibiting some variation, i.e., near clones, in popular open-source packages. We observed that maintainers find our algorithmically created abstractions largely preferable to existing duplicated code.

Rewriting existing code can be considered a form of program transformation, where a program in one form is transformed into a program in another form. One significant form of program transformation is data representation migration, which involves changing the type of a particular data structure and then updating all of the operations that have a control or data dependence on that data structure according to the new type. Changing the data representation can provide benefits such as improved efficiency and improved quality of the computed results. Performing such a transformation is challenging, because it requires applying data-type-specific changes to code fragments that may be widely scattered throughout the source code, connected by dataflow dependencies. Refactoring systems are typically sensitive to dataflow dependencies, but are not programmable with respect to the features of particular data types. Existing program transformation languages provide the needed flexibility, but do not concisely support reasoning about dataflow dependencies.
To address the needs of data representation migration, we propose a new approach to program transformation that relies on a notion of semantic dependency: every transformation step propagates the transformation process onward to code that somehow depends on the transformed code. Our approach provides a declarative transformation specification language for expressing type-specific transformation rules. We further provide scoped rules, a mechanism for guiding rule application, and tags, a device for simple program analysis within our framework, to enable more powerful program transformations.
We have implemented a prototype transformation system based on these ideas for C and C++ code and evaluated it against three example specifications, including vectorization, transformation of integers to big integers, and transformation of array-of-structs data types to struct-of-arrays format. Our evaluation shows that our approach can improve program performance and the precision of the computed results, and that it scales to programs of at least 3700 lines.
Already today, modern driver assistance systems contribute more and more to making individual mobility in road traffic safer and more comfortable. For this purpose, modern vehicles are equipped with a multitude of sensors and actuators which perceive, interpret, and react to the environment of the vehicle. In order to reach the next set of goals along this path, for example to assist the driver in increasingly complex situations or to reach a higher degree of autonomy of driver assistance systems, a detailed understanding of the vehicle environment and especially of other moving traffic participants is necessary.
It is known that motion information plays a key role in human object recognition [Spelke, 1990]. However, full 3D motion information is mostly not taken into account for stereo-vision-based object segmentation in the literature. In this thesis, novel approaches for motion-based object segmentation of stereo image sequences are proposed, from which a generic environmental model is derived that contributes to a more precise analysis and understanding of the respective traffic scene. The aim of the environmental model is to yield a minimal scene description in terms of a few moving objects and stationary background such as houses, crash barriers, or parked vehicles. A minimal scene description aggregates as much information as possible and is characterized by its stability, precision, and efficiency.
Instead of dense stereo and optical flow information, the proposed object segmentation builds on the so-called Stixel World, an efficient superpixel-like representation of space-time stereo data. As it turns out, this step substantially increases the stability of the segmentation and reduces the computational time by several orders of magnitude, thus enabling real-time automotive use in the first place. Besides the efficient, real-time-capable optimization, the object segmentation has to cope with significant noise that is due to the measurement principle of the stereo camera system used. For that reason, in order to obtain an optimal solution under the given extreme conditions, the segmentation task is formulated as a Bayesian optimization problem, which allows regularizing prior knowledge and redundancies to be incorporated into the object segmentation.
Object segmentation as it is discussed here means unsupervised segmentation since typically the number of objects in the scene and their individual object parameters are not known in advance. This information has to be estimated from the input data as well.
For inference, two approaches with their individual pros and cons are proposed, evaluated, and compared. The first approach is based on dynamic programming. The key advantage of this approach is the possibility to take into account non-local priors such as shape or object size information, which is impossible or prohibitively expensive with more local, conventional graph optimization approaches such as graph cut or belief propagation.
In the first instance, the dynamic programming approach is limited to one-dimensional data structures, in this case to the first Stixel row. A possible extension to capture multiple Stixel rows is discussed at the end of this thesis.
Further novel contributions include a special outlier concept to handle gross stereo errors associated with so-called stereo tear-off edges. Additionally, object-object interactions are taken into account by explicitly modeling object occlusions. These extensions prove to be dramatic improvements in practice.
This first approach is compared with a second approach that is based on an alternating optimization of the Stixel segmentation and of the relevant object parameters in an expectation maximization (EM) sense. The labeling step is performed by means of the α-expansion graph cut algorithm; the parameter estimation step is done via one-dimensional sampling and multidimensional gradient descent. By using the Stixel World and due to an efficient implementation, one step of the optimization takes only about one millisecond on a standard single CPU core. To the knowledge of the author, at the time of development there was no faster global optimization in a demonstrator car.
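The following toy sketch conveys the alternating structure of such a scheme: assign one-dimensional "stixels" to objects (labeling step), then refit each object's parameter (here simply a mean disparity) until convergence. The real system uses α-expansion graph cuts, sampling, gradient descent, and regularizing priors instead of this nearest-mean rule; all numbers are made up.

```python
# Toy sketch of the alternating optimization: assign 1-D "stixels" to
# objects (labeling step), then re-estimate each object's parameter
# (here just a mean disparity) and repeat until convergence.
import numpy as np

disparities = np.array([1.0, 1.1, 0.9, 5.0, 5.2, 4.8])  # per-stixel measurements
means = np.array([0.0, 6.0])                            # initial object parameters

for _ in range(10):
    # Labeling step: each stixel joins the object explaining it best.
    labels = np.argmin(np.abs(disparities[:, None] - means[None, :]), axis=1)
    # Parameter step: refit each object's disparity to its stixels.
    means = np.array([disparities[labels == k].mean() for k in range(len(means))])

print(labels, means)  # two objects at mean disparities ~1.0 and ~5.0
```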
For both approaches, various testing scenarios have been carefully selected that allow the proposed methods to be examined thoroughly under different real-world conditions with limited ground truth at hand. As an additional innovative application, the first approach was successfully implemented in a demonstrator car that drove the so-called Bertha Benz Memorial Route from Mannheim to Pforzheim autonomously in real traffic.
At the end of this thesis, the limits of the proposed systems are discussed and an outlook on possible future work is given.
The brain is a highly dynamic and variable system: when the same stimulus is presented to the same animal on the same day multiple times, the neural responses show high trial-to-trial variability. In addition, even in the absence of sensory stimulation, neural recordings spontaneously show seemingly random activity patterns. Evoked and spontaneous neural variability is not restricted to activity but is also found in structure: most synapses do not survive for longer than two weeks, and even those that do show high fluctuations in their efficacy.
Both forms of variability are further affected by stochastic components of neural processing such as frequent transmission failure. At present it is unclear how these observations relate to each other and how they arise in cortical circuits.
Here, we will investigate how the self-organizational processes of neural circuits affect the high variability in two different directions: First, we will show that recurrent dynamics of self-organizing neural networks can account for key features of neural variability. This is achieved in the absence of any intrinsic noise sources by the neural network models learning a predictive model of their environment with sampling-like dynamics. Second, we will show that the same self-organizational processes can compensate for intrinsic noise sources. For this, an analytical model and more biologically plausible models are established to explain the alignment of parallel synapses in the presence of synaptic failure.
Both modeling studies predict properties of neural variability, of which two are subsequently tested on a synapse database from a dense electron microscopy reconstruction from mouse somatosensory cortex and on multi-unit recordings from the visual cortex of macaque monkeys during a passive viewing task. While both analyses yield interesting results, the predicted properties were not confirmed, guiding the next iteration of experiments and modeling studies.
One of the main things that we as humans do throughout our lifetime is the recognition and classification of all kinds of visual objects. It is known that about fifty percent of the neocortex is responsible for visual processing. This tells us that object recognition (OR) is a complex task in the human and animal brain, yet we accomplish it in a fraction of a second.
The main question is: how exactly does the brain do it? Does the brain use some feature extraction algorithm for OR tasks? The hierarchical structure of the visual cortex and studies on a part of the visual cortex called V1 tell us that our brain performs feature extraction for OR tasks by means of Gabor filters. We also use our previous knowledge in object recognition to detect and recognize objects we have never seen before. Moreover, as we grow up, we learn new objects faster than before.
These facts imply that the visual cortex of humans and other animals uses some common (universal) features, at least in the first stages, to distinguish between different objects. In this context, we might ask: do universal features in images exist, such that by using them we are able to efficiently recognize any unknown object? Is it necessary to extract new special features for any new object? How about using existing features from other tasks instead? Is it possible to efficiently use extracted features of a specific task for other tasks? Are there some general features in natural and non-natural images that can also be used for specific object recognition? For example, can we use extracted features of natural images also for handwritten digit classification?
In this context, our work proposes a new information-based approach and tries to give some answers to the questions above. As a result, we found that we could indeed extract unique features that are valid in all three different kinds of tasks considered. They give classification results that are about as good as the results reported in the corresponding literature for the specialized systems, or even better.
Another problem of the OR task is the recognition of objects independently of any perceptual changes. Humans and animals can recognize objects in spite of many deformations (e.g., changes in illumination, rotation in any direction or angle, distortion, and scaling up or down) in a fraction of a second. When observing an object that we have never seen before, we can imagine the rotated or scaled-up object in our mind. Here, too, the question arises: how does the brain solve this problem? Does the brain learn some mapping algorithm (transformation) that is independent of the objects or their features?
There are many approaches to model the mapping task. One of the most versatile is the idea of dynamically changing mappings, the dynamic link mapping (DLM). Although dynamic link mapping systems show interesting results, the DLM system suffers from high computational complexity. In addition, because it uses the least mean squared error as its risk function, its classification performance is also not optimal: for data containing outliers, this system may not work well, because outliers influence mean-squared-error classification much more than probability-based systems. Therefore, we would like to complement the DLM system with a modified approach.
In our contribution, we introduce a new system which employs information criteria (i.e., probabilities) to overcome the outlier problem of DLM systems. The new information-based self-organised system can solve the problem of invariant object recognition, especially for rotation in depth, and avoids the disadvantages of current DLM systems while having a smaller computational complexity.
The behaviour of electronic circuits is influenced by ageing effects. Modelling the behaviour of circuits is a standard approach for the design of faster, smaller, more reliable, and more robust systems. In this thesis, we propose a formalization of robustness that is derived from a failure model based purely on the behavioural specification of a system. For a given specification, simulation can reveal whether a system does not comply with the specification, and thus provide a failure model. Ageing usually works against the specified properties, and ageing models can be incorporated to quantify the impact on specification violations, failures, and robustness. We study ageing effects in the context of analogue circuits. Here, models must factor in infinitely many circuit states. Ageing effects have a cause and an impact that both require models, and on both of these ends the circuit state is highly relevant and must be factored in. For example, static empirical models for ageing effects are not valid in many cases, because the assumed operating states do not agree with the circuit simulation results. This thesis identifies essential properties of ageing effects, and we argue that they need to be taken into account for modelling the interrelation of cause and impact. These properties include frequency dependence, monotonicity, memory and relaxation mechanisms, as well as control by arbitrarily shaped stress levels. Starting from decay processes, we define a class of ageing models that fits these requirements well while remaining arithmetically accessible by means of a simple structure.
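A minimal sketch of what such a decay-style model might look like follows: a single ageing state is driven towards saturation by an arbitrary stress signal and relaxes when stress is absent, while a device parameter drifts with that state. The time constants and the drift law are illustrative assumptions, not the models developed in the thesis.

```python
# Minimal decay-style ageing model sketch: state x is driven by a stress
# signal and relaxes without stress; a device parameter drifts with x.
# Time constants and the drift law are illustrative assumptions.
def simulate(stress, dt=1.0, tau_stress=1e3, tau_relax=1e4):
    x, trace = 0.0, []
    for s in stress:
        # Stress drives x towards 1; relaxation pulls it back towards 0.
        dx = s * (1.0 - x) / tau_stress - (1.0 - s) * x / tau_relax
        x += dt * dx
        trace.append(x)
    return trace

# Square-wave stress: on for 500 steps, off for 500 steps (BTI-like
# stress/relaxation phases).
stress = [1.0] * 500 + [0.0] * 500
trace = simulate(stress)
threshold_shift = [0.05 * x for x in trace]  # illustrative parameter drift
print(threshold_shift[499], threshold_shift[-1])  # peak vs. after relaxation
```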
Modelling ageing effects in semiconductor circuits becomes more relevant with higher integration and smaller structure sizes. With respect to miniaturization, digital systems are ahead of analogue systems, and similarly, ageing models predominantly focus on digital applications. In the digital domain, signal levels are either on or off, or switching in between. Given an ageing model as a physical effect bound to signal levels, ageing models for components and whole systems can be inferred by means of average operation modes and cycle counts. Functional and faithful ageing effect models for analogue components often require a more fine-grained characterization of physical processes. Here, signal levels can take arbitrary values, to begin with. Such fine-grained, physically inspired ageing models do not scale to larger applications and are hard to simulate in reasonable time.

To close the gap between physical processes and system-level ageing simulation, we propose a data-based modelling strategy, according to which measurement data is turned into ageing models for analogue applications. Ageing data is a set of pairs of stress patterns and the corresponding parameter deviations. Assuming additional properties, such as monotonicity or frequency independence, learning algorithms can find a complete model that is consistent with the data set. These ageing effect models decompose into a controlling stress level, an ageing process, and a parameter that depends on the state of this process. Using this representation, we are able to embed a wide range of ageing effects into behavioural models for circuit components. Based on the developed modelling techniques, we introduce a novel model for the BTI effect, an ageing effect that permits relaxation. Subsequently, a transistor-level ageing model for BTI that targets analogue circuits is proposed. Similarly, we demonstrate how ageing data from analogue transistor-level circuit models lifts to purely behavioural block models. With this, we are the first to present a data-based hierarchical ageing modelling scheme.

An ageing simulator for circuits or system-level models computes long-term transients, solutions of a differential equation. Long-term transients are often close to quasi-periodic, in some sense repetitive. If the evaluation of ageing models under quasi-periodic conditions can be done efficiently, long-term simulation becomes practical. We describe an adaptive two-time simulation algorithm that essentially skips periods during simulation, advancing faster on a second time axis. The bottleneck of two-time simulation is the extrapolation through skipped frames. This involves both the evaluation of the ageing models and the consistency of the boundary conditions. We propose a simulator that computes long-term transients by exploiting the structure of the proposed ageing models. These models permit extrapolation of the ageing state by means of a locally equivalent stress, a sort of average stress level. This level can be computed efficiently and also gives rise to a dynamic step control mechanism.

Ageing simulation has a wide range of applications. This thesis vastly improves the applicability of ageing simulation for analogue circuits in terms of modelling and efficiency. An ageing effect model that is part of a circuit component model accounts for parametric drift that is directly related to the operation mode. For example, asymmetric load on a comparator or power stage may lead to offset drift, which is not an empirical effect.
Monitor circuits can report such effects during operation, when they become significant. Simulating the behaviour of these monitors is important during their development. Ageing effects can be compensated using redundant parts, and annealing can restore broken components to a functional state. We show that such mechanisms can be simulated in place using our models and algorithms. The aim of automated circuit synthesis is to create a circuit that implements a specification for a certain use case. Ageing simulation can identify candidates that are more reliable. Efficient ageing simulation allows various operation modes to be factored in and helps refine the selection. Using long-term ageing simulation, we have analysed the fitness of a set of synthesized operational amplifiers with similar properties concerning various use cases. This procedure enables the automatic selection of the most ageing-resilient implementation.
This thesis contributes to the field of machine learning with a specific focus on methods for learning relations between inputs. Learning relationships between images is one of the most common primitives in vision. There are many vision tasks in which relationships across images play an important role, among them motion estimation, activity recognition, stereo vision, multi-view geometry, and visual odometry. Many of these tasks mainly depend on motion and disparity cues, which are inferred from the relations across multiple image pairs. The approaches presented in this thesis mainly deal with, but are not limited to, learning representations for motion and depth. This thesis by articles consists of five articles which present relational feature learning models along with their applications in computer vision. In the first article, we present an approach for encoding motion in videos. To this end, we show that the detection of spatial transformations can be viewed as the detection of coincidence or synchrony between the given sequence of frames and a sequence of features that are related by the transformation we wish to detect. Learning to detect synchrony becomes possible by introducing "multiplicative interactions" into the hidden units of single-layered sparse coding models.
We show that the learned motion representations, employed for the task of activity recognition, achieve competitive performance on multiple benchmarks.

Stereo vision is an important challenge in computer vision and useful for many applications in that field. In the second article, we extend the energy-based learning models previously used for motion encoding to the context of depth perception. Given the common architecture of the models for encoding motion and depth, we show that it is possible to define a single model that learns a unified representation for both cues. Our experimental results show that learning a combined representation for depth and motion makes it possible to achieve state-of-the-art performance at the task of 3-D activity analysis, and to perform better than existing hand-engineered 3-D motion features.

The autoencoder is a popular unsupervised learning method for learning efficient encodings of a given set of data samples. Regularized autoencoders, which are typically used to learn over-complete and sparse representations of the input data, were shown to fail on intrinsically high-dimensional data like videos. In the third article, we investigate the reason for this behavior. It can be observed that regularized autoencoders typically learn negative hidden unit biases. We show that the learning of negative biases is the result of hidden units being responsible for both the sparsity and the representation of the input data. As a result, the behavior of the model resembles clustering methods, which would require an exponentially large number of features to model intrinsically high-dimensional data. Based on this understanding, we propose a new activation function which decouples the roles of the hidden layer and uses linear encoding. This makes it possible to learn representations on data with very high intrinsic dimensionality. We also show that the gating connections in the bi-linear models and the single-layer models from articles one and two of this thesis can be thought of as a way to attain a linear encoding scheme, which allows them to learn good representations of videos.

Visual odometry is the task of inferring the egomotion of a moving object from visual information such as images and videos. It can primarily be used for the task of localization and has many applications in the fields of robotics and navigation. The work in article four was motivated by the idea of applying deep learning techniques, which are successful methods for many vision tasks, to visual odometry. The visual odometry task mainly requires the inference of motion and depth information from visual input, which can then be mapped to velocity and change in direction. We use the relational feature models presented in articles one and two to infer a combined motion and depth representation from stereo video sequences. The combined representation is then mapped to discrete velocity and change-in-direction labels using convolutional neural networks. Our approach is an end-to-end deep-learning-based architecture which uses a single type of computational model and learning rule. Preliminary results show that the architecture is capable of learning the mapping from input video to egomotion.

Activity recognition is a challenging computer vision task with many real-world applications. It is well known that applying computer vision research to real-time applications is hard.
In the fifth article of this thesis, we present a real-time activity recognition system based on deep learning methods. Our approach uses energy-based relational feature learning models for the computation of local motion features directly from videos. A bag-of-words over the local motion features is used for the analysis of activity in a given video sequence. We implement this system on a distributed computational platform and demonstrate its performance on the iCub robot. Using GPUs, we demonstrate real-time performance, which makes the deployment of activity recognition systems in real-world scenarios possible.
Algorithms for the Maximum Cardinality Matching Problem which greedily add edges to the solution enjoy great popularity. We systematically study strengths and limitations of such algorithms, in particular of those which consider node degree information to select the next edge. Concentrating on nodes of small degree is a promising approach: it was shown, experimentally and analytically, that very good approximate solutions are obtained for restricted classes of random graphs. Results achieved under these idealized conditions, however, remained unsupported by statements which depend on less optimistic assumptions.
The KarpSipser algorithm and 1-2-Greedy, which is a simplified variant of the well-known MinGreedy algorithm, proceed as follows. In each step, if a node of degree one (resp. at most two) exists, then an edge incident with a minimum degree node is picked, otherwise an arbitrary edge is added to the solution.
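A compact sketch of the two greedy schemes just described might look as follows (plain adjacency sets, with no attempt at the optimized bookkeeping an efficient implementation would use); `max_small_degree=1` corresponds to KarpSipser and `max_small_degree=2` to 1-2-Greedy.

```python
# Sketch of the greedy scheme described above: if a node of degree at
# most max_small_degree exists, pick an edge at a minimum-degree node,
# otherwise pick an arbitrary edge; remove both endpoints and repeat.
def greedy_matching(adj, max_small_degree=2):
    adj = {u: set(vs) for u, vs in adj.items()}
    matching = []
    while any(adj.values()):
        nodes = [u for u, vs in adj.items() if vs]
        u = min(nodes, key=lambda n: len(adj[n]))   # minimum-degree node
        if len(adj[u]) > max_small_degree:          # no small-degree node:
            u = nodes[0]                            # pick an arbitrary edge
        v = next(iter(adj[u]))
        matching.append((u, v))
        for w in (u, v):                            # delete matched nodes
            for x in adj.pop(w):
                adj[x].discard(w)
    return matching

path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}  # a path with D = 2
print(greedy_matching(path))                   # perfect matching [(0, 1), (2, 3)]
```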
We analyze the approximation ratio of both algorithms on graphs of degree at most D. Families of graphs are known for which the expected approximation ratio converges to 1/2 as D grows to infinity, even if randomization against the worst case is used. If randomization is not allowed, then we show the following convergence to 1/2: the 1-2-Greedy algorithm achieves approximation ratio (D-1)/(2D-3); if the graph is bipartite, then the more restricted KarpSipser algorithm achieves the even stronger factor D/(2D-2). These guarantees set both algorithms apart from other famous matching heuristics such as Greedy or MRG: those algorithms depend on randomization to break the 1/2-barrier even for paths with D=2. Moreover, for any D our guarantees are strictly larger than the best known bounds on the expected performance of the randomized variants of Greedy and MRG.
To investigate whether KarpSipser or 1-2-Greedy can be refined to achieve better performance, or be simplified without loss of approximation quality, we systematically study entire classes of deterministic greedy-like algorithms for matching. To this end, we employ the adaptive priority algorithm framework of Borodin, Nielsen, and Rackoff: in each round, an adaptive priority algorithm requests one or more edges by formulating their properties---e.g. "is incident with a node of minimum degree"---and adds the received edges to the solution. No constraints on time and space usage are imposed; hence an adaptive priority algorithm is restricted only by its nature of picking edges in a greedy-like fashion. If an adaptive priority algorithm requests edges by processing degree information, then we show that it does not surpass the performance of KarpSipser: our D/(2D-2) guarantee for bipartite graphs is tight, and KarpSipser is optimal among all such "degree-sensitive" algorithms even though it uses degree information merely to detect degree-1 nodes. Moreover, we show that if the degrees of both nodes of an edge may be processed, as e.g. the Double-MinGreedy algorithm does, then the performance of KarpSipser can only be increased marginally, if at all. Of special interest is the capability of requesting edges not only by specifying the degree of a node but additionally its set of neighbors. This enables an adaptive priority algorithm to "traverse" the input graph. We show that on general degree-bounded graphs no such algorithm can beat factor (D-1)/(2D-3). Hence our bound for 1-2-Greedy is tight, and this algorithm performs optimally even though it ignores neighbor information. Furthermore, we show that an adaptive priority algorithm deteriorates to approximation ratio exactly 1/2 if it does not request small-degree nodes. This tremendous decline in approximation quality happens for graphs on which 1-2-Greedy and KarpSipser perform optimally, namely paths with D=2. Consequently, requesting small-degree nodes is vital to beat factor 1/2.
Summarizing, our results show that 1-2-Greedy and KarpSipser stand out from known and hypothetical algorithms as an intriguing combination of both approximation quality and conceptual simplicity.
The constantly increasing memory density and performance of recent Field Programmable Gate Arrays (FPGAs) has boosted their usage in many technical applications such as particle accelerators, the automotive industry, as well as defense and space. Some of these fields of interest are characterized by the presence of ionizing radiation caused by natural decay or artificial excitation processes. Unfortunately, this type of radiation affects various digital circuits, including the transistors forming the Static Random Access Memory (SRAM) storage cells that constitute the technology node for high-performance FPGAs. Various kinds of digital misbehavior, temporary or permanent, as well as the physical destruction of transistors are the consequence. Therefore, the mitigation of such effects becomes an essential design rule when using SRAM FPGAs in ionizing radiation environments. Tolerance against soft errors can be handled across the various layers of modern FPGA design, starting with the silicon manufacturing process itself, continuing with configuration, firmware, and system design, and finally ending up with application and software engineering. But only a highly optimized, joint concept of system-wide fault tolerance provides sufficient resilience against ionizing radiation effects without losing too many valuable device resources to the safety approach. This concept is introduced, analyzed, improved, and validated in the present work. It includes, but is not limited to, static configuration scrubbing, various firmware redundancy approaches, dynamic memory conservation, as well as state machine protection. Guidelines are given to improve manual design practices concerning fault tolerance, and tools are shown to reduce the necessary efforts. Finally, the SysCore development platform has been maintained to support the recommended design methods and to act as Device Under Test (DUT) for all particle irradiation experiments that prove the efficiency of the proposed concept of system-wide fault tolerance for SRAM FPGAs in ionizing radiation environments.
Many problems that are NP-hard on general graphs (e.g., Hamiltonian cycle, k-colorability) are easy to solve efficiently on trees. Tree decompositions, decompositions of graphs into small subgraphs along trees, make it possible to generalize this to efficient algorithms on tree-like graphs. Tree-likeness is captured by the treewidth: the smaller the treewidth, the more tree-like the graph.
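The simplest instance of this idea is dynamic programming on trees themselves (treewidth 1). As a hedged illustration, the sketch below solves the NP-hard maximum independent set problem in linear time on a tree; the bags of a general tree decomposition generalize the two per-node states used here.

```python
# Dynamic programming on a tree: maximum independent set in linear time.
# Each node contributes two states ("taken" / "not taken"); bags of a
# general tree decomposition generalize these states to bounded subsets.
def max_independent_set(tree, root=0):
    """tree: adjacency dict of an undirected tree."""
    def dp(u, parent):
        take, skip = 1, 0   # best set size with u taken / not taken
        for v in tree[u]:
            if v == parent:
                continue
            t, s = dp(v, u)
            take += s       # if u is taken, its children must be skipped
            skip += max(t, s)
        return take, skip
    return max(dp(root, None))

tree = {0: [1, 2], 1: [0, 3, 4], 2: [0], 3: [1], 4: [1]}
print(max_independent_set(tree))  # 3, e.g. the set {2, 3, 4}
```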
The importance of tree decompositions has been widely recognized since their use in a series of 23 publications on graph minor theory by Robertson and Seymour. The main result of the series was the proof of the graph minor theorem, which states that the minor relation is a well-quasi-ordering on graphs. Tree decompositions have been applied in various areas, such as probabilistic networks, biology, combinatorial problems, and compiler construction. Moreover, there are algorithmic meta-theorems showing that they are useful for broad classes of problems. Tree decompositions are of central importance in this thesis. The successes achieved on tree-like graphs by means of tree decompositions motivate attempts to generalize them to larger graph classes. One successful approach is based on irrelevant vertices and thereby reduces problems on the larger graph class to problems on a graph class of small treewidth: if the input graph of a problem has small treewidth, the problem is solved using tree decompositions. Otherwise, there is an irrelevant vertex, such that the problem has a solution on the original graph if and only if it has a solution on the graph without this irrelevant vertex. Irrelevant vertices are found and removed until a graph of small treewidth remains.
An important tool for finding irrelevant vertices is the grid minor theorem: by this theorem, graphs of large treewidth also contain large grids as minors. This grid-treewidth duality is also of central importance in bidimensionality theory, another successful approach for solving problems efficiently on graph classes larger than those of small treewidth.
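A standard formulation of this duality (stated here for concreteness, not quoted from the thesis) reads:

    % Grid minor theorem (Robertson and Seymour): there is a function f such that
    \forall r \in \mathbb{N}:\quad \mathrm{tw}(G) \ge f(r)
      \;\Longrightarrow\; G \text{ contains the } r \times r \text{ grid as a minor.}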
Detectors of modern high-energy physics experiments generate huge data rates during operation. The efficient read-out of this data from the front-end electronics is a sophisticated task; the main challenges, however, vary from experiment to experiment. The Compressed Baryonic Matter (CBM) experiment, currently under construction at the Facility for Antiproton and Ion Research (FAIR) in Darmstadt, Germany, foresees a novel approach to data acquisition.
Unlike previous comparable experiments that organize data read-out based on global, hierarchical trigger decisions, CBM relies on free-running, self-triggered front-end electronics. Data is pushed to the next stage of the read-out chain rather than pulled from the buffers of the previous stage. This new paradigm requires a completely new development of the read-out electronics.
As one part of this thesis, a firmware for a read-out controller interfacing such a free-running and self-triggered front-end ASIC, the GET4 chip, was implemented. The firmware was developed to run on a Field Programmable Gate Array (FPGA). An FPGA is an integrated circuit whose behavior can be reconfigured "in the field", which offers great flexibility: bugs can be fixed and completely new features can be added even after the hardware has been installed. Due to these general advantages, the use of FPGAs is desired for the final experiment. However, there is also a drawback: the only affordable FPGAs today are based on either SRAM or Flash technology, and neither can easily be operated in a radiation environment.
SRAM-based devices suffer severely from Single Event Upsets (SEUs) and Flash-based FPGAs deteriorate too fast from Total Ionizing Dose (TID) effects.
Several radiation mitigation techniques exist for SRAM-based FPGAs, but careful evaluation is required for each use case. For CBM it is not clear whether the higher resource consumption of added redundancy, which more or less directly translates into additional cost, outweighs the advantages of using FPGAs. In addition, it is not even clear whether radiation mitigation techniques (e.g. scrubbing) that have already been operated successfully in space applications work as efficiently at the much higher particle rates expected at CBM.
In this thesis, existing radiation mitigation techniques have been analyzed and eligible techniques have been implemented for the above-mentioned read-out controller. To minimize additional costs, redundancy was only implemented for selected parts of the design.
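For orientation, the readback variant of configuration scrubbing can be sketched as follows; the interface names are hypothetical, since actual scrubbers work frame by frame through the FPGA's configuration port.

    # Schematic readback scrubbing loop (hypothetical interface names).
    def scrub(config_port, golden_frames):
        for addr, golden in golden_frames.items():
            frame = config_port.readback(addr)   # read one configuration frame
            if frame != golden:                  # upset detected by comparison
                config_port.write(addr, golden)  # rewrite the corrupted frame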
Finally, the radiation mitigated read-out controller was tested by mounting the device directly into a particle beam at Forschungszentrum Jülich. The tests show that the radiation mitigation effect of the implemented techniques remains sound, even at a very high particle flux and with only part of the design protected by costly redundancy.
The promising results of the in-beam tests suggest using FPGAs in the read-out chain of the CBM-ToF detector.
Efficient cryptographic algorithms are an important cornerstone of many new applications, such as the Internet of Things (IoT) or contactless payment systems. It is therefore important that new algorithms with improved security properties and specific performance characteristics are developed and analyzed. One example is the current trend towards lightweight algorithms. These developments facilitate the implementation of novel systems and also make it possible to protect existing systems by upgrading them to the state of the art. Besides the cryptologic analysis, the evaluation of implementation aspects is very important in order to obtain a realistic assessment of the achievable performance.
Therefore, different software and hardware architectures have to be evaluated for each new algorithm. The systematic evaluation of software implementations for different hardware architectures has made great progress in recent years, for example through the SHA-3 competition. In comparison, the evaluation for hardware platforms such as FPGAs remains very time-consuming and error-prone. This is due to many factors, e.g. the manifold options offered by the various target technologies. One possible improvement is to begin the evaluation with a more abstract approach in order to select interesting architectures and implementations based on theoretical properties.
The first main contribution of this thesis is the development of an abstract evaluation methodology based on a theoretical model of clocked circuits. The model improves the understanding of fundamental properties of such circuits and also facilitates the abstract modeling of architectures for a specific algorithm. When several different architectures for the same algorithm are evaluated, it is also possible to determine whether an algorithm scales well. For example, the effects of narrowing the datapath on the memory consumption can be analyzed. Based on the developed methodology, important properties such as the memory requirement, the number of clock cycles, or the pipeline depth can be evaluated systematically. From this, a rough estimate of the effectiveness of an architecture can be derived.
The performance estimation is further underpinned by a theoretical notion of optimality of the number of clock cycles. An architecture is optimal in this sense if it is stall-free, i.e. requires no wait cycles. By considering the data dependencies between the individual rounds, a minimal and maximal number of clock cycles can be determined. Violating these bounds would mean that the computation of the round function has not produced all output bits by the time they are needed for the next round, and wait cycles would therefore arise.
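In notation introduced here only to restate the condition above, stall-freedom can be expressed as a constraint on when round outputs become available:

    % An architecture is stall-free if every output bit of round i is ready
    % no later than the cycle in which round i+1 first consumes it:
    t_{\mathrm{ready}}(b) \;\le\; t_{\mathrm{need}}(b)
      \quad \text{for every bit } b \text{ passed from round } i \text{ to round } i+1.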
The second contribution of the dissertation applies the analysis methodology to several hash functions. Six hash functions are evaluated: BLAKE, Grøstl, Keccak, JH, Skein, and Photon. The first five are the finalists of the SHA-3 competition. The SHA-3 finalists have high security as their primary design goal and high performance only as a secondary one. In contrast, Photon was designed for lightweight applications such as RFID tags; to this end, Photon's security level was also reduced. For each algorithm, one or more possible memory organizations are developed. Next, the number of clock cycles is determined on the basis of the memory organization. The general goal is the development of architectures with an optimal number of clock cycles. The discussion then focuses on different ways of implementing the round function optimally. The result of the evaluation comprises at least an estimate of the minimal memory requirement, the analyzed pipeline depth, and the theoretical throughput for long messages at a fixed clock frequency. These results allow an assessment of the achievable performance of the respective architectures.
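The theoretical long-message throughput follows the usual relation (a standard formula, given here for concreteness): with b message bits absorbed per block, clock frequency f_clk, and c clock cycles per block,

    \text{Throughput} \;=\; \frac{b \cdot f_{\mathrm{clk}}}{c}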
The third contribution of the thesis consists of several implementation results. First, results for the SHA-3 finalists BLAKE, Grøstl, JH, Keccak, and Skein are presented. Of the five algorithms, all except Skein achieve relatively high performance, while Skein lags far behind. A further investigation focuses on smaller implementations of the SHA-3 winner Keccak, including non-standardized variants with a smaller state. These smaller versions are compared with first FPGA results for the Photon hash function. A key finding is that Keccak is in principle very competitive even for FPGA applications with constrained resources.
Modern experiments in heavy ion collisions operate with huge data rates that cannot be fully stored on currently available storage devices. Therefore, the data flow must be reduced by selecting those collisions that potentially carry information of physics interest. The future CBM experiment will have no simple criteria for selecting such collisions and requires the full online reconstruction of the collision topology, including the reconstruction of short-lived particles.
In this work the KF Particle Finder package for online reconstruction and selection of short-lived particles is proposed and developed. It reconstructs more than 70 decays, covering signals from all the physics cases of the CBM experiment: strange particles, strange resonances, hypernuclei, low mass vector mesons, charmonium, and open-charm particles.
The package is based on the Kalman filter method, providing a full set of particle parameters together with their errors, including position, momentum, mass, energy, and lifetime. It shows high quality of the reconstructed particles, high efficiencies, and high signal-to-background ratios.
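For illustration, a generic Kalman filter measurement update in textbook form is shown below; the KF Particle Finder itself uses a fully vectorized multi-dimensional track-state formulation, so this is a sketch of the underlying method only.

    import numpy as np

    # Generic Kalman filter measurement update (textbook form): x is the
    # state estimate, P its covariance, z a measurement, H the measurement
    # matrix, R the measurement noise covariance.
    def kalman_update(x, P, z, H, R):
        S = H @ P @ H.T + R               # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
        x = x + K @ (z - H @ x)           # corrected state estimate
        P = (np.eye(len(x)) - K @ H) @ P  # corrected covariance
        return x, P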
The KF Particle Finder is extremely fast, achieving a reconstruction speed of 1.5 ms per minimum-bias Au+Au collision at 25 AGeV beam energy on a single CPU core. It is fully vectorized and parallelized and shows strong linear scalability on many-core architectures with up to 80 cores. Within the First Level Event Selection package it also scales on many-core clusters of up to 3200 cores.
The developed KF Particle Finder package is a universal platform for short-lived particle reconstruction, physics analysis, and online selection.