University Publications
This thesis contributes to the field of machine learning, with a specific focus on methods for learning relations between inputs. Learning relationships between images is among the most common primitives in vision: relationships across images play an important role in many vision tasks, such as motion estimation, activity recognition, stereo vision, multi-view geometry and visual odometry. Many of these tasks depend mainly on motion and disparity cues, which are inferred from relations across multiple image pairs. The approaches presented in this thesis mainly deal with, but are not limited to, learning representations for motion and depth. This thesis by articles consists of five articles which present relational feature learning models along with their applications in computer vision. In the first article, we present an approach for encoding motion in videos. To this end, we show that the detection of spatial transformations can be viewed as the detection of coincidence, or synchrony, between the given sequence of frames and a sequence of features which are related by the transformation we wish to detect. Learning to detect synchrony becomes possible by introducing "multiplicative interactions" into the hidden units of single-layer sparse coding models.
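As a rough illustration of the multiplicative-interaction idea, the following NumPy sketch computes hidden responses from products of filter responses on two frames; all names and dimensions are illustrative and do not reproduce the thesis model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: two 64-pixel frames, 32 filters, 16 mapping units.
U = rng.standard_normal((32, 64))   # filters applied to frame x
V = rng.standard_normal((32, 64))   # filters applied to frame y
W = rng.standard_normal((16, 32))   # pooling into mapping (hidden) units

def encode_relation(x, y):
    """Detect synchrony between filter responses on the two frames.

    The elementwise product is the multiplicative interaction: a factor
    is large only when both frames respond to the corresponding filters,
    i.e. when the frames are related by the encoded transformation.
    """
    factors = (U @ x) * (V @ y)
    return 1.0 / (1.0 + np.exp(-(W @ factors)))  # sigmoid mapping units

x = rng.standard_normal(64)
y = np.roll(x, 2)                   # y is a shifted copy of x
print(encode_relation(x, y).shape)  # (16,)
```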
We show that the learned motion representations, when employed for the task of activity recognition, achieve competitive performance on multiple benchmarks.

Stereo vision is an important challenge in computer vision and useful for many applications in that field. In the second article, we extend the energy-based learning models previously used for motion encoding to the context of depth perception. Given the common architecture of the models for encoding motion and depth, we show that it is possible to define a single model that learns a unified representation for both cues. Our experimental results show that learning a combined representation for depth and motion makes it possible to achieve state-of-the-art performance at the task of 3-D activity analysis, and to perform better than existing hand-engineered 3-D motion features.

The autoencoder is a popular unsupervised learning method for learning an efficient encoding of a given set of data samples. Regularized autoencoders, which are typically used to learn over-complete and sparse representations of the input data, have been shown to fail on intrinsically high-dimensional data such as videos. In the third article, we investigate the reason for this behavior. It can be observed that regularized autoencoders typically learn negative hidden unit biases. We show that the learning of negative biases results from the hidden units being responsible for both the sparsity and the representation of the input data. As a consequence, the behavior of the model resembles that of clustering methods, which would require an exponentially large number of features to model intrinsically high-dimensional data. Based on this understanding, we propose a new activation function which decouples these two roles of the hidden layer and uses a linear encoding. This makes it possible to learn representations on data with very high intrinsic dimensionality. We also show that the gating connections in the bi-linear models and the single-layer models from articles one and two of this thesis can be regarded as a way to attain a linear encoding scheme, which allows them to learn good representations on videos.

Visual odometry is the task of inferring the egomotion of a moving object from visual information such as images and videos. It is primarily used for localization and has many applications in robotics and navigation. The work in article four was motivated by the idea of applying deep learning techniques, which are successful in many vision tasks, to visual odometry. The visual odometry task mainly requires inferring motion and depth information from visual input, which can then be mapped to velocity and change in direction. We use the relational feature models presented in articles one and two to infer a combined motion and depth representation from stereo video sequences. The combined representation is then mapped to discrete velocity and change-in-direction labels using convolutional neural networks. Our approach is an end-to-end deep learning architecture that uses a single type of computational model and learning rule. Preliminary results show that the architecture is capable of learning the mapping from input video to egomotion.
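As a hedged sketch of the final mapping step described above, the snippet below discretises continuous egomotion into class labels; the bin edges are hypothetical and are not taken from the article.

```python
import numpy as np

# Hypothetical bin edges: velocity and change in direction are
# discretised into class labels; the edges below are illustrative only.
velocity_edges = np.array([0.5, 1.0, 2.0])   # m/s
direction_edges = np.array([-0.1, 0.1])      # rad/frame

def egomotion_label(velocity, direction_change):
    """Map continuous egomotion to a single discrete class index."""
    v_bin = int(np.digitize(velocity, velocity_edges))
    d_bin = int(np.digitize(direction_change, direction_edges))
    return v_bin * (len(direction_edges) + 1) + d_bin

print(egomotion_label(1.4, 0.0))  # one of 4 * 3 = 12 classes
```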
Activity recognition is a challenging computer vision task with many real-world applications, and it is well known that carrying computer vision research over into real-time applications is hard. In the fifth article of this thesis, we present a real-time activity recognition system based on deep learning methods. Our approach uses energy-based relational feature learning models to compute local motion features directly from videos. A bag-of-words representation over the local motion features is used for the analysis of activity in a given video sequence. We implement this system on a distributed computational platform and demonstrate its performance on the iCub robot. Using GPUs, we achieve real-time performance, which makes the deployment of activity recognition systems in real-world scenarios possible.
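The following toy sketch shows the bag-of-words aggregation step under stated assumptions: the descriptors and the codebook are random stand-ins for the local motion features and visual words used in the article.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative stand-ins: 500 local motion descriptors from one video
# and a codebook of 50 visual words.
descriptors = rng.standard_normal((500, 16))
codebook = rng.standard_normal((50, 16))

def bag_of_words(desc, words):
    """Histogram of nearest-codeword assignments, L1-normalised."""
    # Squared distances between every descriptor and every codeword.
    d2 = ((desc[:, None, :] - words[None, :, :]) ** 2).sum(axis=2)
    assignment = d2.argmin(axis=1)
    hist = np.bincount(assignment, minlength=len(words)).astype(float)
    return hist / hist.sum()

video_representation = bag_of_words(descriptors, codebook)
print(video_representation.shape)  # (50,), input to the activity classifier
```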
This thesis presents methods for creating high-resolution scientific illustrations in an interactive process. The basis is a newly introduced GPU-based illustration pipeline, in which image layers can be freely created from a 3D model and combined with one another. Each layer shows a particular aspect of the illustration using a selectable technique, whose parameters can be edited interactively. To ensure efficiency, the entire method is designed to perform its computations on the GPU as far as possible, which makes it possible to render the illustrations at interactive frame rates.
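A minimal software analogue of the layer-combination step, assuming simple back-to-front alpha compositing; the actual pipeline runs on the GPU and may support richer combination modes.

```python
import numpy as np

H, W = 4, 4  # tiny illustrative canvas

def composite(layers):
    """Alpha-composite RGBA layers back to front (the "over" operator)."""
    out = np.zeros((H, W, 3))
    for rgb, alpha in layers:  # layers ordered back to front
        out = alpha[..., None] * rgb + (1.0 - alpha[..., None]) * out
    return out

base = (np.full((H, W, 3), 0.8), np.ones((H, W)))  # shaded model layer
contours = (np.zeros((H, W, 3)), np.eye(H))        # line-drawing layer on top
print(composite([base, contours]).shape)           # (4, 4, 3)
```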
Detectors of modern high-energy physics experiments generate huge data rates during operation. The efficient read-out of this data from the front-end electronics is a sophisticated task whose main challenges vary from experiment to experiment. The Compressed Baryonic Matter (CBM) experiment, currently under construction at the Facility for Antiproton and Ion Research (FAIR) in Darmstadt, Germany, foresees a novel approach to data acquisition.
Unlike previous comparable experiments that organize data read-out based on global, hierarchical trigger decisions, CBM is based on free-running and self-triggered front-end electronics. Data is pushed to the next stage of the read-out chain rather than pulled from the buffers of the previous stage. This new paradigm requires a completely new development of read-out electronics.
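A toy sketch of the push paradigm, with no relation to the actual CBM link protocol: the self-triggered front end timestamps every hit itself and pushes it downstream, and no stage ever requests data.

```python
import queue

link = queue.Queue()  # stands in for the read-out link

def front_end(hits):
    """Self-triggered front end: every hit is timestamped and pushed."""
    for t, channel in hits:
        link.put({"timestamp": t, "channel": channel})

def readout_stage():
    """The next stage consumes whatever arrives; it never issues a trigger."""
    while not link.empty():
        print(link.get())

front_end([(100, 3), (104, 7), (131, 3)])
readout_stage()
```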
As one part of this thesis, a firmware for a read-out controller interfacing such a free-running and self-triggered front-end ASIC, the GET4 chip, was implemented. The firmware was developed to run on a Field Programmable Gate Array (FPGA). An FPGA is an integrated circuit whose behavior can be reconfigured "in the field", which offers a lot of flexibility: bugs can be fixed and completely new features can be added even after the hardware has been installed. Due to these general advantages, the use of FPGAs is desired for the final experiment. There is, however, also a drawback: the only affordable FPGAs today are based on either SRAM or Flash technology, and neither can easily be operated in a radiation environment.
SRAM-based devices suffer severely from Single Event Upsets (SEUs) and Flash-based FPGAs deteriorate too fast from Total Ionizing Dose (TID) effects.
Several radiation mitigation techniques exist for SRAM-based FPGAs, but careful evaluation is required for each use case. For CBM it is not clear whether the higher resource consumption of added redundancy, which more or less directly translates into additional cost, outweighs the advantages of using FPGAs. In addition, it is not even clear whether radiation mitigation techniques (e.g. scrubbing) that have already been put into operation successfully in space applications work as efficiently at the much higher particle rates expected at CBM.
In this thesis, existing radiation mitigation techniques have been analyzed and eligible techniques have been implemented for the above-mentioned read-out controller. To minimize additional costs, redundancy was only implemented for selected parts of the design.
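Triple modular redundancy (TMR) with majority voting is the classic form of such selectively applied redundancy on SRAM-based FPGAs; whether the thesis used exactly this scheme is not stated here, so the following is only a conceptual sketch.

```python
def majority(a, b, c):
    """Bitwise 2-of-3 majority vote over three redundant register copies."""
    return (a & b) | (a & c) | (b & c)

# One copy corrupted by a single event upset; the vote masks the error.
golden = 0b1011
upset = golden ^ 0b0100  # bit flip in one replica
assert majority(golden, golden, upset) == golden
print(bin(majority(golden, golden, upset)))  # 0b1011
```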
Finally, the radiation mitigated read-out controller was tested by mounting the device directly into a particle beam at Forschungszentrum Jülich. The tests show that the radiation mitigation effect of the implemented techniques remains sound, even at a very high particle flux and with only part of the design protected by costly redundancy.
The promising results of the in-beam tests suggest using FPGAs in the read-out chain of the CBM-ToF detector.
The number of multilingual texts in the World Wide Web (WWW) is increasing dramatically, and a multilingual economic zone like the European Union (EU) requires the availability of multilingual Natural Language Processing (NLP) tools. Due to the rapid development of NLP tools, many lexical, syntactic, semantic and other linguistic features have been used in different NLP applications. However, there are situations where these features cannot be used, due to the application type or the unavailability of NLP resources for some of the languages. That is why an application intended to handle multilingual texts must rely on features that do not depend on a particular language or on specific linguistic tools. In this thesis, we focus on two such applications: text readability, and source and translation classification.
In this thesis, we provide 18 features that are not only suitable for both applications but are also independent of the language and of specific linguistic tools. In order to build a readability classifier, we use texts from three different languages: English, German and Bangla. Our proposed features achieve a classification accuracy that is comparable with a classifier using 40 linguistic features. The readability classifier achieves a classification F-score of 74.21% on the English Wikipedia corpus, an F-score of 75.47% on the English textbook corpus, an F-score of 86.46% on the Bangla textbook corpus and an F-score of 86.26% on the German GEO/GEOLino corpus.
We used more than two million sentence pairs from 21 European languages in order to build the source and translation classifier. The classifier, using the same 18 features, achieves a classification accuracy of 86.63%. We also used the same features to build a classifier that classifies translated texts based on their origin; it achieves a classification accuracy of 75% for texts from 10 European languages. In this thesis, we also provide four different corpora: three for text readability analysis and one for corpus-based translation studies.
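The thesis' 18 features are not enumerated in this abstract; as a hedged illustration of the kind of language-independent surface features such classifiers can use, consider the following sketch.

```python
def surface_features(text):
    """Language-independent surface statistics (illustrative examples only)."""
    sentences = [s for s in text.split(".") if s.strip()]
    words = text.split()
    return {
        "avg_sentence_length": len(words) / len(sentences),
        "avg_word_length": sum(len(w) for w in words) / len(words),
        "type_token_ratio": len({w.lower() for w in words}) / len(words),
    }

print(surface_features("The cat sat. The cat sat on the mat."))
```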
Local protein synthesis has redefined our ideas about the basic cellular mechanisms that underlie synaptic plasticity and memory formation. The population of messenger RNAs that are localised to dendrites, however, remains sparsely identified. Furthermore, neuronal morphological complexity and spatial compartmentalisation require efficient mechanisms for messenger RNA localisation and control over translational efficiency or transcript stability. 3' untranslated regions (3'UTRs), located downstream of stop codons, are recognised for providing binding platforms for many regulatory units and thus encode the control of these processes. The hippocampus, a part of the brain involved in the formation, organisation and storage of memories, provides a natural platform to investigate patterns of RNA localisation: it comprises tissue layers which naturally separate the principal neuronal cell bodies from their processes (axons and dendrites). Identifying the full complement of localised transcripts and associated 3'UTR isoforms is of great importance for understanding both basic neuronal functions and principles of synaptic plasticity. These findings can be used to study the properties of neuronal networks as well as to understand how these networks malfunction in neuronal diseases.
Here, deep sequencing is used to identify the mRNAs resident in the synaptic neuropil of the hippocampus. Analysis of a neuropil data set yields a list of 8,379 transcripts, of which 2,550 are localised in dendrites and/or axons. Using a fluorescent barcode strategy to label individual mRNAs shows that the relative abundance of different mRNAs in the neuropil varies over five orders of magnitude. High-resolution in situ hybridisation validated the presence of these mRNAs in both cultured neurons and hippocampal slices. Among the many mRNAs identified, a large fraction of known synaptic proteins, including signaling molecules, scaffolds and receptors, is discovered. These results reveal a previously unappreciated, enormous potential for the local protein synthesis machinery to supply, maintain and modify the dendritic and synaptic proteome.
Using advances in library preparation for next-generation sequencing experiments, the diversity of 3'UTR isoforms present in localised transcripts from the rat hippocampus is examined. The results indicate an increase in 3'UTR heterogeneity and 3'UTR length in neuronal tissue. The evolutionary importance of this 3'UTR diversity and its correlation with changes in species, tissue and cell complexity is investigated. The analysis reveals the population of 3'UTR isoforms required for transcript localisation in the overall neuronal transcriptome, as well as the regulatory elements and binding sites specific to neuronal compartments. The configuration of poly(A) signals is correlated with gene function and can be further exploited to determine similar mechanisms for alternative polyadenylation.
The use of custom methods for next-generation sequencing, as well as novel approaches for RNA quantification and visualisation, necessitates the development and implementation of new downstream analytic methods. Library methods for data-mining transcript annotations, expression and ontology relations are provided. The use of a specialised search engine targeting key features of previous experiments is proposed. A processing pipeline for the NanoString technology is developed, assessing experimental quality and providing methods for data normalisation. High-resolution in situ images are analysed by a custom application, showing a correlation between RNA quantity and spatial distribution. The wide variety of bioinformatic methods included in this work indicates the importance of downstream analysis for reaching biological conclusions. Maintaining the integrability and modularity of our implementations is a high priority, as the dynamic nature of many experimental techniques requires constant improvement of the computational analysis.
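The concrete normalisation steps of the NanoString pipeline are not given in this abstract; one plausible sketch, assuming normalisation against spike-in positive-control probes, looks like this.

```python
import numpy as np

# Illustrative raw counts: rows = genes, columns = samples.
counts = np.array([[120.0, 95.0], [40.0, 55.0], [300.0, 260.0]])
pos_controls = np.array([[80.0, 60.0], [160.0, 110.0]])  # spike-in probes

def normalise(counts, controls):
    """Scale each sample so positive-control geometric means agree."""
    gm = np.exp(np.log(controls).mean(axis=0))   # per-sample geometric mean
    factors = gm.mean() / gm                     # per-sample scaling factor
    return counts * factors

print(np.round(normalise(counts, pos_controls), 1))
```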
Quarks and gluons are the building blocks of all hadronic matter, such as protons and neutrons. Their interaction is described by Quantum Chromodynamics (QCD), a theory under test by large-scale experiments like the Large Hadron Collider (LHC) at CERN and, in the future, the Facility for Antiproton and Ion Research (FAIR) at GSI. However, perturbative methods can only be applied to QCD at high energies. Studies from first principles are possible via a discretization onto a Euclidean space-time grid. This discretization of QCD is called Lattice QCD (LQCD) and is the only ab-initio option outside of the high-energy regime. LQCD is extremely compute- and memory-intensive; in particular, it is by definition always bandwidth-limited. Thus, despite the complexity of LQCD applications, it led to the development of several specialized compute platforms and influenced the development of others. In recent years, however, General-Purpose computation on Graphics Processing Units (GPGPU) has emerged as a new means of parallel computing. Contrary to machines traditionally used for LQCD, graphics processing units (GPUs) are a mass-market product, which promises advantages in both the pace at which higher-performing hardware becomes available and its price.

CL2QCD is an OpenCL-based implementation of LQCD using Wilson fermions that was developed within this thesis. It runs on GPUs by all major vendors as well as on central processing units (CPUs). On the AMD Radeon HD 7970 it provides the fastest double-precision D̸ kernel for a single GPU, achieving 120 GFLOPS. The D̸ kernel, the most compute-intensive kernel in LQCD simulations, is commonly used to compare LQCD platforms. This performance is enabled by an in-depth analysis of optimization techniques for bandwidth-limited codes on GPUs. Further analysis of the communication between GPU and CPU, as well as between multiple GPUs, enables high-performance Krylov space solvers and linear scaling to multiple GPUs within a single system.

LQCD calculations require a sampling of the phase space, which the hybrid Monte Carlo (HMC) algorithm performs. For this task, a single AMD Radeon HD 7970 GPU provides four times the performance of two AMD Opteron 6220 CPUs running an optimized reference code; the same advantage is achieved in terms of energy efficiency. In terms of normalized total cost of acquisition (TCA), GPU-based clusters match conventional large-scale LQCD systems, but contrary to those they can be scaled up from a single node. Examples of large GPU-based systems are LOEWE-CSC and SANAM; on both, CL2QCD has already been used in production for LQCD studies.
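Since the D̸ kernel is by definition bandwidth-limited, its attainable performance follows the roofline model. A back-of-the-envelope check with the HD 7970's public specifications and an assumed arithmetic intensity (the kernel's actual bytes-per-FLOP figure is not given here):

```python
def roofline(peak_gflops, bandwidth_gbs, flops_per_byte):
    """Attainable performance of a kernel under the roofline model."""
    return min(peak_gflops, bandwidth_gbs * flops_per_byte)

# AMD Radeon HD 7970: ~947 GFLOPS double precision, ~264 GB/s memory
# bandwidth. The arithmetic intensity below is an assumed value chosen
# to reproduce the reported ~120 GFLOPS; it is not taken from the thesis.
print(roofline(947.0, 264.0, 0.45))  # -> 118.8, bandwidth-limited
```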
Acceleration of Biomedical Image Processing and Reconstruction with FPGAs
Increasing chip sizes and better programming tools have made it possible to push the boundaries of application acceleration with reconfigurable computer chips. In this thesis, the potential of acceleration with Field Programmable Gate Arrays (FPGAs) is examined for applications that perform biomedical image processing and reconstruction. The dataflow paradigm was used to port the analysis of image data for localization microscopy and for 3D electron tomography from an imperative description to the FPGA for the first time.
After the primitives of image processing on FPGAs are presented, a general workflow is given for analyzing imperative source code and converting it into a hardware pipeline in which every node processes image data in parallel. This theoretical foundation is then used to accelerate both example applications. For localization microscopy, a speedup by a factor of 185 compared to an Intel i5 450 CPU was achieved, and electron tomography could be sped up by a factor of 5 over an Nvidia Tesla C1060 graphics card, while maintaining full accuracy in both cases.
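A hedged software analogue of the dataflow idea, assuming a chain of generator stages: each stage consumes the pixel stream one element at a time, mirroring how the hardware pipeline keeps a different pixel in flight at every node.

```python
def source(pixels):
    """First pipeline node: emits the raw pixel stream."""
    yield from pixels

def threshold(stream, t):
    """Middle node: binarizes each pixel as it flows past."""
    for p in stream:
        yield 1 if p > t else 0

def count_ones(stream):
    """Sink node: reduces the stream to a single result."""
    return sum(stream)

# Stages are chained like hardware pipeline nodes; data flows through
# them one pixel at a time instead of being processed in bulk.
pixels = [12, 200, 34, 255, 0, 180]
print(count_ones(threshold(source(pixels), 100)))  # -> 3
```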
An Approach to the Semantic Self-Management of Distributed Applications in the Private Living Environment
(2014)
The enrichment of the private living environment with intelligent technical assistance systems will increase strongly over the coming decades. As part of this development, the use of external and in-home IT services will grow, which also increases the complexity of the resulting overall systems. Approaches are therefore needed to make these systems productively usable and manageable even for technically inexperienced users, in order to avoid overwhelming them. In the context of home service platforms, which play a central role in such systems, the semantic modeling of services has gained considerable importance in recent years. It serves, on the one hand, the formal representation of associated context information arising from interaction with sensors and actuators and, on the other hand, the improvement of interoperability between systems from different manufacturers. Existing approaches, however, are limited to a single central compute node executing the service platform and use semantics, if at all, only for processing context information. Technical management of the overall system usually does not take place.
Against this background, the goal of this thesis is to develop an approach for the semantic self-management of distributed service-based applications, specifically in the context of home service platforms.
The thesis first defines formal ontologies for services, service-level management, self-management and the associated management rules, which are integrated at runtime with concrete services and their measured performance indicators. A model-driven architecture approach (Model Driven Architecture, MDA) enables technology-independent management at an abstract level, allowing management rules to be reused in other scenarios.
This approach is then transferred into an architecture for a highly available autonomic manager that can monitor and control services and their service platforms, based on the MAPE-K control loop (Monitor, Analyze, Plan, Execute, Knowledge) known from Autonomic Computing.
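A minimal sketch of the MAPE-K loop structure referenced above; the actual manager is ontology-driven and OSGi-based, so all names and thresholds here are hypothetical.

```python
knowledge = {"max_latency_ms": 200}  # K: shared management knowledge

def monitor(service):
    return {"latency_ms": service["latency_ms"]}            # M: collect metrics

def analyze(metrics):
    return metrics["latency_ms"] > knowledge["max_latency_ms"]  # A: detect violation

def plan(violation):
    return ["migrate_service"] if violation else []         # P: choose actions

def execute(actions, service):
    if "migrate_service" in actions:                        # E: apply actions
        service["node"], service["latency_ms"] = "node2", 90

service = {"node": "node1", "latency_ms": 350}
execute(plan(analyze(monitor(service))), service)
print(service)  # {'node': 'node2', 'latency_ms': 90}
```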
The thesis concludes with a qualitative and quantitative evaluation of the achieved results, by means of an OSGi-based prototype implementation, which suggests applications beyond the boundaries of the private living environment.
The simulation of flow in fractured porous media is of crucial importance for many hydrogeological applications, such as preventing groundwater contamination near a landfill or a repository for radioactive waste, extracting fossil fuels, or storing carbon dioxide underground. Owing to their nature, and in particular the large permeability within the fractures, fractures constitute preferential transport paths and can decisively influence the flow profile. However, the anisotropic geometry of the fractures, combined with enormous jumps in parameters such as the permeability over very small distances, places great demands on the numerical methods.
This thesis therefore pursues two approaches to modeling the fractures: a lower-dimensional approach, motivated by the anisotropic geometry with very small aperture and very long extent of the fractures, and a full-dimensional approach that resolves all processes within the fracture. The results of these approaches are examined for benchmark problems, with the outcome that the numerically cheaper lower-dimensional approach delivers satisfactory results only for very thin fractures. Furthermore, a criterion is introduced that indicates at runtime, based on properties of the fracture and flow parameters, whether the lower-dimensional approach is sufficiently valid. A dimension-adaptive approach is presented that switches to the full-dimensional model according to this criterion. The results show that considerably more accurate results can be achieved this way, without a full resolution being required in every case and over the entire simulation time.
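A hedged sketch of the runtime switching idea: the actual criterion combines fracture properties and flow parameters as described above, while the quantities and thresholds below are purely illustrative.

```python
def lower_dimensional_valid(aperture, length, perm_fracture, perm_matrix,
                            max_ratio=1e-3):
    """Illustrative validity test for the reduced fracture model."""
    thin = aperture / length < max_ratio          # fracture is very thin
    contrast = perm_fracture / perm_matrix > 1e3  # strong permeability jump
    return thin and contrast

def simulate_fracture(aperture, length, kf, km):
    """Pick the model at runtime, as the dimension-adaptive approach does."""
    if lower_dimensional_valid(aperture, length, kf, km):
        return "lower-dimensional interface model"
    return "fully resolved, equi-dimensional model"

print(simulate_fracture(aperture=1e-4, length=10.0, kf=1e-8, km=1e-14))
print(simulate_fracture(aperture=0.5, length=10.0, kf=1e-8, km=1e-14))
```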