Biological ageing is a degenerative and irreversible process that ultimately leads to the death of the organism. The process is complex and controlled by genetic, environmental and stochastic factors. Although many theories have been put forward over the last decades, none of them fully describes the complex mechanisms that lead to ageing. In general, biological processes and environmental factors cause molecular damage and an accumulation of impaired cellular components. Counteracting surveillance systems work against this accumulation, including the repair, remodelling and degradation of damaged or impaired components. Nevertheless, at some point these systems are no longer effective, either because the growing amount of molecular damage can no longer be removed efficiently or because the repair and removal mechanisms themselves become impaired. The organism finally declines and dies. Investigating and understanding these counteracting mechanisms and the complex interplay of decline and maintenance requires holistic, systems-biological approaches. Hence, the processes that lead to ageing in the fungal model organism Podospora anserina were analysed using several advanced bioinformatics methods. In contrast to many other ageing models, P. anserina has a short lifespan and low biochemical complexity, and it is readily accessible to genetic manipulation.
To obtain a general overview of the biochemical processes affected during ageing in P. anserina, an initial comprehensive investigation was performed, aimed at revealing genes that are significantly regulated and expressed in an age-dependent manner. This investigation was based on an age-dependent transcriptome analysis. Extensive analyses revealed several age-related pathways and indicated that autophagy in particular may play a crucial role during ageing. For example, the expression of autophagy-associated genes was found to increase in the course of ageing.
Subsequently, to investigate and characterise the autophagy pathway, its individual components and their interactions, a new bioinformatics approach, Path2PPI, was developed. Path2PPI predicts protein-protein interaction networks of specific pathways by means of a homology-based comparison and was applied to construct the protein-protein interaction network of autophagy in P. anserina.
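The core idea behind such homology-based prediction (often called interolog transfer) can be illustrated with a small sketch. The following C++ fragment is a minimal illustration under my own assumptions, not Path2PPI's actual implementation; the data structures and the homologOf mapping are hypothetical placeholders.

    #include <map>
    #include <set>
    #include <string>
    #include <utility>
    #include <vector>

    // Hypothetical interolog transfer: if A' interacts with B' in a reference
    // species and A, B are homologs of A', B' in the target species, then
    // predict an interaction between A and B.
    using Interaction = std::pair<std::string, std::string>;

    std::set<Interaction> predictInterologs(
        const std::vector<Interaction>& referenceInteractions,
        const std::multimap<std::string, std::string>& homologOf) // ref -> target
    {
        std::set<Interaction> predicted;
        for (const auto& [refA, refB] : referenceInteractions) {
            auto [a0, a1] = homologOf.equal_range(refA);
            auto [b0, b1] = homologOf.equal_range(refB);
            for (auto a = a0; a != a1; ++a)
                for (auto b = b0; b != b1; ++b)
                    predicted.insert({a->second, b->second}); // transferred edge
        }
        return predicted;
    }

In practice such transferred edges are additionally scored, e.g. by the quality of the homology assignment, before they are accepted into the predicted network.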
The predicted network was extended with experimental data, comprising the transcriptome data as well as newly generated protein-protein interaction data obtained from a yeast two-hybrid analysis. Using various mathematical and statistical methods, the topological properties of the constructed network were compared with those of randomly generated networks to confirm its biological significance. In addition, based on this topological and functional analysis, the most important proteins were determined, and functional modules were identified that correspond to the different sub-pathways of autophagy. Owing to the integrated transcriptome data, the autophagy network could be linked to the ageing process. For example, several proteins were identified whose genes are continuously up- or down-regulated during ageing, and it was shown for the first time that autophagy-associated genes are significantly co-expressed during ageing.
The presented biological network provides a systems-biological view of autophagy and enables further studies aimed at analysing the relationship between autophagy and ageing. Furthermore, it allows the investigation of potential interventions into the ageing process that could extend the healthy lifespan of P. anserina as well as of other eukaryotic organisms, in particular humans.
For the class of balanced, irreducible Pólya urn schemes with two colours, say black and white, limit theorems for the number of black balls after n steps are known. Depending on the ratio of the eigenvalues of the replacement matrix, two regimes of limit laws occur: almost sure convergence to a non-degenerate random variable whose distribution depends on the initial composition of the urn and is known not to be normal, or weak convergence to the normal distribution. In this thesis, upper bounds on the rates of convergence are given in both the non-normal and the normal limit case.
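For orientation, the dichotomy can be written compactly; the notation below is my paraphrase of the standard formulation in the literature, not quoted from the thesis. With B_n the number of black balls after n steps, \( \lambda_1 > \lambda_2 \) the eigenvalues of the replacement matrix and \( \rho = \lambda_2/\lambda_1 \):

\[
\frac{B_n - \mathbb{E}[B_n]}{\sqrt{n}} \xrightarrow{\;d\;} \mathcal{N}(0,\sigma^2) \quad \text{for } \rho < \tfrac{1}{2},
\qquad
\frac{B_n - \mathbb{E}[B_n]}{n^{\rho}} \xrightarrow{\;\text{a.s.}\;} \Xi \quad \text{for } \rho > \tfrac{1}{2},
\]

where \( \Xi \) is non-degenerate, non-normal and depends on the initial composition of the urn; at the boundary case \( \rho = \tfrac{1}{2} \) the normalization carries an extra logarithmic factor.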
Recently, Aumüller and Dietzfelbinger proposed a version of dual-pivot Quicksort, called "Count", which is optimal among dual-pivot versions with respect to the average number of key comparisons required. In this master's thesis we provide a further probabilistic analysis of "Count". We derive an exact formula for the average number of swaps needed by "Count", as well as an asymptotic formula for the variance of the number of swaps and a limit law. For the number of key comparisons, the asymptotic variance and a limit law are likewise identified. We also consider both complexity measures jointly and find their asymptotic correlation.
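The defining idea of the "Count" strategy is the rule deciding which pivot an element is compared against first. The following C++ sketch is my illustration of that comparison rule, not code from the thesis; it only classifies elements relative to two pivots p < q and omits the in-place partitioning and recursion of a full Quicksort.

    #include <vector>

    // Classify elements relative to pivots p < q using the "Count" rule:
    // compare against p first while more small than large elements have been
    // seen so far, otherwise compare against q first. Both branches yield the
    // same classification; they differ only in the number of comparisons spent.
    enum class Class { Small, Medium, Large };

    std::vector<Class> classifyCount(const std::vector<int>& a, int p, int q) {
        std::vector<Class> result;
        result.reserve(a.size());
        long smalls = 0, larges = 0;
        for (int x : a) {
            Class c;
            if (smalls > larges) {          // expect a small element: ask p first
                if (x < p)      c = Class::Small;
                else if (x > q) c = Class::Large;
                else            c = Class::Medium;
            } else {                        // expect a large element: ask q first
                if (x > q)      c = Class::Large;
                else if (x < p) c = Class::Small;
                else            c = Class::Medium;
            }
            if (c == Class::Small) ++smalls;
            if (c == Class::Large) ++larges;
            result.push_back(c);
        }
        return result;
    }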
The future heavy-ion experiment CBM (FAIR/GSI, Darmstadt, Germany) will focus on measurements of very rare probes, which require the experiment to operate at extreme interaction rates of up to 10 MHz. Due to the high multiplicity of charged particles in heavy-ion collisions, this leads to data rates of up to 1 TB/s. To match currently achievable archival rates, this data flow has to be reduced online by more than two orders of magnitude.
The rare observables have complicated trigger signatures and require the full event topology to be reconstructed online. The huge data rates, together with the absence of simple hardware trigger signatures, make the latency-limited trigger architectures typical of conventional experiments inapplicable to CBM. Instead, CBM will employ a novel data acquisition concept with autonomous, self-triggered front-end electronics.
While in conventional experiments with event-by-event processing the association of detector hits with the corresponding physical event is known a priori, this is not true for the CBM experiment, where the reconstruction algorithms have to be modified to process data that is not associated with events. At the highest interaction rates, the time difference between hits belonging to the same collision will be larger than the average time difference between two consecutive collisions; thus, events will overlap in time. Because of this possible overlap, one needs to analyse time-slices rather than isolated events.
The time-stamped data will be shipped and collected into a readout buffer in the form of time-slices of a certain length. The time-slice data will be delivered to a large computer farm, where the archival decision is made after online reconstruction. The association of hit information with physical events must therefore be performed in software and requires full online event reconstruction not only in space but also in time, so-called 4-dimensional (4D) track reconstruction.
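To illustrate why time enters the reconstruction, the following C++ sketch groups the time-stamped hits of a time-slice into event candidates by looking for gaps in time. The Hit structure and the threshold parameter are invented for illustration; the actual CBM reconstruction (a Cellular Automaton track finder) is far more involved.

    #include <algorithm>
    #include <vector>

    // A simplified stand-in for a detector hit: only its timestamp matters here.
    struct Hit { double time_ns; /* position, detector id, ... omitted */ };

    // Group the hits of a time-slice into event candidates by splitting the
    // time-sorted stream wherever two consecutive hits are further apart than
    // maxGap_ns. At low rates this recovers isolated events; at 10 MHz events
    // overlap and a gap-based split alone no longer works, which is why CBM
    // needs full 4D (space + time) track reconstruction.
    std::vector<std::vector<Hit>> splitByTimeGap(std::vector<Hit> hits,
                                                 double maxGap_ns) {
        std::sort(hits.begin(), hits.end(),
                  [](const Hit& a, const Hit& b) { return a.time_ns < b.time_ns; });
        std::vector<std::vector<Hit>> events;
        for (const Hit& h : hits) {
            if (events.empty() ||
                h.time_ns - events.back().back().time_ns > maxGap_ns)
                events.emplace_back();
            events.back().push_back(h);
        }
        return events;
    }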
Within the scope of this work, a 4D track finding algorithm for online reconstruction has been developed. The 4D Cellular Automaton (CA) track finder reproduces the performance and speed of the traditional event-based algorithm. It is both vectorized (using SIMD instructions) and parallelized (across CPU cores), and it shows strong scalability on many-core systems: a speed-up factor of 10.1 was achieved on a CPU with 10 hyper-threaded physical cores.
The 4D CA track finder algorithm is ready for time-slice-based reconstruction in the CBM experiment.
Modern mobile devices offer a great variety of data that can be recorded. This broad range of information makes it possible to tailor applications more closely to the needs of a user. Various kinds of context information can be collected, such as position or movement. Besides integrated sensors, a broad range of additional sensors can be connected to a mobile device; these make it possible, for example, to measure physiological signals of a user.

The human body offers a broad range of different signals, which have been used in several settings to draw conclusions about the state of a user and allow a deeper insight into his or her emotional or mental state. Electrodermal activity gives feedback about the current arousal level of a user; heart rate and heart rate variability allow an estimation of valence and mental load. Several models exist to infer emotional states from information such as valence and arousal: Russell defined a two-dimensional model that uses valence and arousal to define affective states, and Yerkes and Dodson described a curve that expresses the relationship between arousal and performance. Various systems use physiological signals to determine the user state for tailoring and adapting applications. At the time of this work, however, most of them did not address the use of physiological signals for user state estimation in mobile applications and mobile scenarios. Mobile scenarios pose several challenges: factors that influence physiological signals, such as movement, have to be controlled, and a user might be interrupted or influenced by the environment. Combining physiological data with context information may improve the interpretation of user state in mobile scenarios.

In this work, we present a model that addresses the challenges of mobile scenarios and provides an estimation of user state to mobile applications. To cover a broad range of mobile applications, affective and cognitive state are provided as output. Heart rate and electrodermal activity are used as input, together with context information about movement and performance. Electrodermal activity is measured by a simple sensor that can be worn as a wristband; heart rate is measured by a chest strap as used in sports. The input channels are transformed into affective and cognitive state using a fuzzy rule-based approach. With the help of fuzzy logic, uncertainty can be expressed and the data can be processed continuously. First, the input channels are fuzzified using membership functions. A first fuzzy rule set then transforms the input signals into values for valence, arousal and mental load. In a second step, these values and the context information are transformed with another fuzzy rule set into values for affective and cognitive state. The affective state is based on the model of Russell, where valence and arousal determine different emotional states. The model outputs eight affective states (alarmed, excited, happy, relaxed, tired, bored, sad and frustrated), each of which can take a high, medium, low or very low value. The cognitive state is determined from mental load and context information about performance and movement; its output value can be very high, high, medium or low. The model was implemented as a background service for Android devices.
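A minimal C++ sketch of the fuzzification and rule-evaluation steps described above; the membership functions, thresholds and the single rule shown are invented for illustration and are not the thesis's actual rule base.

    #include <algorithm>

    // Trapezoidal membership function: degree of membership in [0, 1].
    // Rises on a..b, equals 1 on b..c, falls on c..d. (Shapes are illustrative.)
    double trapezoid(double x, double a, double b, double c, double d) {
        if (x <= a || x >= d) return 0.0;
        if (x < b)  return (x - a) / (b - a);
        if (x <= c) return 1.0;
        return (d - x) / (d - c);
    }

    // Stage 1 (sketch): fuzzify raw inputs and apply one example rule.
    // Rule: IF heart rate is high AND electrodermal activity is high
    //       THEN arousal is high (rule strength = min of the antecedents).
    double arousalHigh(double heartRate_bpm, double eda_microsiemens) {
        double hrHigh  = trapezoid(heartRate_bpm,    90.0, 110.0, 200.0, 220.0);
        double edaHigh = trapezoid(eda_microsiemens,  4.0,   8.0,  30.0,  40.0);
        return std::min(hrHigh, edaHigh); // classic Mamdani-style AND
    }

A second rule set of the same shape would then combine arousal, valence, mental load and the context channels into the affective and cognitive state outputs.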
Different applications were used to evaluate the model. The model was integrated into a multiplayer space-shooter game called "Zone of Impulse", which mainly benefits from the affective state. The cognitive state is more relevant in applications such as a simple vocabulary trainer that adapts its difficulty to the user state.

A study was conducted to evaluate different aspects of the model and, in particular, its suitability for mobile scenarios. The game "Zone of Impulse" and the vocabulary trainer were investigated in different configurations: versions with the integrated model were compared to versions of the applications without the model, as well as to versions of the model without context information. In total, 41 participants took part in the study. Some of the participants performed the study tasks in a mobile scenario, walking along several streets; the remaining participants performed them in a controlled environment in a sitting position. Different aspects were collected with ratings and questionnaires.

Overall, participants reported that they did not feel impaired by the sensors they had to wear. The results showed that the combination of physiological data and context information had an advantage over the versions without context information in some of the ratings. A comparison between versions with and without the model showed that the subjective mental load ratings were significantly better for the version with the model; subjective ratings for aspects such as fun, overstrain and support were mixed. When comparing the application versions in indoor and outdoor scenarios, no significant difference could be found, which suggests that there is no loss of interpretation quality in outdoor scenarios. The results also showed that the model seems to be robust enough to compensate for the loss of an input channel, as there was no significant difference between application versions with the fully integrated model and versions with one channel lost. With the model developed in this work, context information and physiological data were combined to improve user state estimation, and pitfalls of user state estimation in mobile scenarios were overcome by this combination. However, the model was only evaluated with a limited set of applications and of the situations that mobile scenarios offer.
Data-parallel programming is more important than ever since serial performance is stagnating. All mainstream computing architectures have been and still are enhancing their support for general-purpose computing with explicitly data-parallel execution. For CPUs, data-parallel execution is implemented via SIMD instructions and registers. GPU hardware works very similarly, allowing very efficient parallel processing of wide data streams under a common instruction stream.
These advances in parallel hardware have not been accompanied by the necessary advances in established programming languages, so developers have not been enabled to explicitly state the data-parallelism inherent in their algorithms. GPU and CPU vendors have introduced new programming languages, language extensions, and dialects that enable explicit data-parallel programming. However, it is arguable whether the programming models introduced by these approaches deliver the best solution, and some of them suffer from a hardware-specific focus in their language design. There are several programming problems for which these language approaches are not expressive and flexible enough.
This thesis presents a solution tailored to the C++ programming language. The concepts and interfaces are presented specifically for C++, but as abstractly as possible to facilitate adoption by other programming languages as well. The approach builds on the observation that C++ is very expressive in terms of types. Types communicate intention and semantics to developers as well as to compilers: they allow developers to clearly state their intentions and allow compilers to optimize via the explicitly defined semantics of the type system.
Since data-parallelism affects data structures as well as algorithms, it is not sufficient to enhance the language's expressivity in only one area. Defining types whose operators express data-parallel execution automatically widens the possibilities for building data structures. This thesis therefore defines the low-level, but fully portable, arithmetic and mask types required to build a flexible and portable abstraction for data-parallel programming. On top of these, it presents higher-level abstractions such as fixed-width vectors and masks, abstractions for interfacing with containers of scalar types, and an approach for the automated vectorization of structured types.
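To make the idea of arithmetic and mask types concrete, here is a minimal C++ sketch in the spirit of such a library; the type simd4, its mask type, and the blend helper are invented for illustration and are not the interface defined in the thesis or the Vc library.

    #include <array>
    #include <cstddef>

    // A toy 4-wide data-parallel float type: every operator acts on all lanes.
    struct simd4 {
        std::array<float, 4> v;
        friend simd4 operator+(simd4 a, simd4 b) {
            for (std::size_t i = 0; i < 4; ++i) a.v[i] += b.v[i];
            return a;
        }
    };

    // The corresponding mask type: one boolean per lane, produced by comparisons.
    struct mask4 {
        std::array<bool, 4> m;
    };

    mask4 operator<(const simd4& a, const simd4& b) {
        mask4 r{};
        for (std::size_t i = 0; i < 4; ++i) r.m[i] = a.v[i] < b.v[i];
        return r;
    }

    // Masked assignment replaces per-element branches: where the mask is true,
    // take the new value, otherwise keep the old one (a data-parallel "if").
    simd4 blend(mask4 k, simd4 ifTrue, simd4 ifFalse) {
        for (std::size_t i = 0; i < 4; ++i)
            if (!k.m[i]) ifTrue.v[i] = ifFalse.v[i];
        return ifTrue;
    }

A real implementation maps such types directly onto SIMD registers and instructions, so the loops above disappear into single machine operations.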
The Vc library is an implementation of these types. I developed Vc to research data-parallel types and as a solution for explicitly data-parallel programming. This thesis discusses a few example applications built with Vc that show the library's real-world relevance. The Vc types enable the parallelization of search algorithms and data structures in a way unique to this solution, which underlines the importance of using the type system for expressing data-parallelism. Vc has also become an important building block in the high-energy physics community; their reliance on Vc shows that the library and its interfaces have been developed to production quality.
In the first part of the thesis, we show that the payment flow of a linear tax on trading gains from a security with a semimartingale price process can be constructed for all càglàd, adapted trading strategies. It is characterized as the unique continuous extension of the tax payments for elementary strategies with respect to the topology of uniform convergence in probability. In this framework, we prove that, under quite mild assumptions, dividend payoffs almost surely have a negative effect on the investor's after-tax wealth if the riskless interest rate is always positive. In addition, we give an example of tax-efficient strategies for which the tax payment flow can be computed explicitly.
In the second part of the thesis, we investigate the impact of capital gains taxes on optimal investment decisions in a quite simple model. Namely, we consider a risk-neutral investor who owns one risky stock, which she assumes to have a lower expected return than the riskless bank account, and we determine the optimal stopping time at which she sells the stock to invest the proceeds in the bank account up to the maturity date. In the case of linear taxes and a positive riskless interest rate, the problem is nontrivial because at the selling time the investor has to realize book profits, which triggers tax payments. We derive a boundary, continuous and increasing in time and decreasing in the volatility of the stock, such that the investor sells the stock at the first time its price is less than or equal to this boundary.
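One plausible way to formalize the stopping problem sketched above is the following; the notation (tax rate \( \alpha \), purchase price \( s_0 \), maturity \( T \), riskless rate \( r \)) is introduced here for illustration and is not quoted from the thesis:

\[
\sup_{\tau \le T}\; \mathbb{E}\!\left[ e^{r(T-\tau)} \bigl( S_\tau - \alpha\,(S_\tau - s_0)^+ \bigr) \right],
\]

where the supremum runs over stopping times \( \tau \): selling at time \( \tau \) realizes the book profit \( (S_\tau - s_0)^+ \), triggers the linear tax payment \( \alpha (S_\tau - s_0)^+ \), and the after-tax proceeds then grow at the riskless rate \( r \) until maturity \( T \).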
On development, feasibility, and limits of highly efficient CPU and GPU programs in several fields
(2013)
With processor clock speeds having stagnated, parallel computing architectures have achieved a breakthrough in recent years. Emerging many-core processors like graphics cards run hundreds of threads in parallel, and vector instructions are experiencing a revival. Parallel processors with many independent but simple arithmetic logic units fail at executing serial tasks efficiently; however, their sheer parallel processing power makes them predestined for parallel applications, while the simple construction of their cores makes them unbeatably power-efficient. Unfortunately, old programs cannot profit from a simple recompilation; adapting them often requires rethinking and modifying algorithms to exploit parallel execution. Many applications have serial subroutines that are very hard to parallelize, hence contemporary compute clusters are often heterogeneous, offering fast processors for serial tasks and parallel processors for parallel tasks. In order not to waste the available compute power, highly efficient programs are mandatory.
This thesis is about the development of fast algorithms and their implementation on modern CPUs and GPUs, about the maximum achievable efficiency with respect to peak performance and power consumption, and about the feasibility and limits of programs for CPUs, GPUs, and heterogeneous systems. Three entirely different applications from distinct fields, developed in the course of this thesis, are presented.
The ALICE experiment at the LHC particle collider at CERN studies heavy-ion collisions at high rates of several hundred Hz, with every collision producing thousands of particles whose trajectories must be reconstructed. For this purpose, the ALICE track reconstruction and track merging have been adapted to GPUs and deployed on 64 GPU-enabled compute nodes at CERN.
After a testing phase, the tracker ran in nonstop operation during 2012, providing full real-time track reconstruction. It employs a multithreaded pipeline as well as asynchronous data transfer to ensure continuous GPU utilization, and it outperforms the fastest available CPUs by about a factor of three.
The Linpack benchmark is the standard tool for ranking compute clusters. It solves a dense system of linear equations, the dominant part of the work being matrix multiplication performed by a routine called DGEMM. A heterogeneous, GPU-enabled version of DGEMM and Linpack has been developed that can use the CAL, CUDA, and OpenCL APIs as backends. Employing this implementation, the LOEWE-CSC cluster reached rank 22 in the November 2010 Top500 list of the fastest supercomputers, and the Sanam cluster achieved second place in the November 2012 Green500 list of the most power-efficient supercomputers. An elaborate lookahead algorithm, a pipeline, and asynchronous data transfer hide the serial, CPU-bound tasks of Linpack behind DGEMM execution on the GPU, reaching the highest efficiency on GPU-accelerated clusters.
Erasure codes enable failure-tolerant storage of data and real-time failover, ensuring that servers and even complete data centers remain operational in case of a hardware defect. This is an absolute necessity for present-day computer infrastructure. The mathematical theory behind these codes involves matrix computations in finite fields, which are not natively supported by modern processors and are hence computationally very expensive. This thesis presents a novel scheme for the fast generation of the encoding matrix and demonstrates a fast implementation of the encoding itself that uses exclusively either integer or logical vector instructions. Depending on the scenario, it always hits a different hard limit of the hardware: the maximum attainable memory bandwidth, the peak instruction throughput, or the PCI Express bandwidth when GPUs or FPGAs are used.
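As a flavor of why finite-field arithmetic is expensive without hardware support, here is a minimal C++ sketch of multiplication in GF(2^8), the field commonly used by Reed-Solomon-style erasure codes. It illustrates the kind of operation that such codes map onto integer and logical vector instructions; it is not the thesis's optimized implementation.

    #include <cstdint>

    // Multiply two elements of GF(2^8) with the reduction polynomial
    // x^8 + x^4 + x^3 + x + 1 (0x11B). Each step is only a shift, an XOR,
    // and a conditional reduction -- cheap logical operations, but many of
    // them per byte, which is why a vectorized implementation pays off.
    uint8_t gfmul(uint8_t a, uint8_t b) {
        uint8_t p = 0;
        for (int i = 0; i < 8; ++i) {
            if (b & 1) p ^= a;        // add (XOR) a if the current bit of b is set
            bool carry = a & 0x80;    // would the shift leave the field?
            a <<= 1;
            if (carry) a ^= 0x1B;     // reduce modulo the field polynomial
            b >>= 1;
        }
        return p;
    }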
The thesis demonstrates that in most cases, with respect to the available peak performance, GPU implementations can be as efficient as their CPU counterparts; with respect to cost or power consumption, they are much more efficient. For this purpose, complex tasks must be split into serial and parallel parts, and the execution must be pipelined such that the CPU-bound tasks are hidden behind GPU execution. A few cases are identified where this is not possible due to PCI Express limitations, or not reasonable because practical GPU languages are missing.
The central objects of this dissertation are translation surfaces. These are Riemann surfaces obtained from polygons embedded in the Euclidean plane by gluing parallel sides of equal length. Two translation surfaces are equal if the polygons can be transformed into each other by "cutting and regluing via translations". The group GL_2(R) acts on the set of translation surfaces via linear maps on the polygons. The stabilizer of a translation surface X under this action is called the Veech group of X and is denoted SL(X). The Veech group is a discrete subgroup of SL_2(R) and hence a Fuchsian group.
Fuchsian groups are divided, according to their limit set, into elementary and non-elementary groups; the latter are further divided into groups of the first and of the second kind. Fuchsian groups of finite covolume are called lattices and are exactly the finitely generated groups of the first kind. Translation surfaces whose Veech group is a lattice are called Veech surfaces and are of particular interest, since the Veech alternative holds for them.
A finer measure for the size of a Fuchsian group is the critical exponent. It is defined as the infimum of all real numbers for which the Poincaré series converges, and for every infinite Fuchsian group it lies between 0 and 1. The main goal of this dissertation is the proof of Theorem 1: There are translation surfaces for which the critical exponent of their Veech group lies strictly between 1/2 and 1.
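For reference, the critical exponent just mentioned is defined via the Poincaré series as follows (the choice of base point does not matter); this is the standard definition, spelled out here for the reader:

\[
P(s) = \sum_{\gamma \in \Gamma} e^{-s\, d(o, \gamma o)}, \qquad
\delta(\Gamma) = \inf \{\, s \ge 0 : P(s) < \infty \,\},
\]

where \( \Gamma \) is the Fuchsian group, \( d \) the hyperbolic distance, and \( o \) a base point in the hyperbolic plane.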
The critical exponent of an elementary group is at most 1/2, so translation surfaces with elementary Veech groups are ruled out as candidates for the theorem. The critical exponent of a lattice is 1, so Veech surfaces are ruled out as well.
Until 2003, lattices were the only known non-elementary Veech groups. McMullen classified the Veech surfaces of genus 2 and showed that every such surface with only one singularity lies in the GL_2(R)-orbit of the surface L_D, which arises from an L-shaped polygon with suitable side lengths depending on D.
While even today no translation surface with a Veech group of the second kind is known, McMullen, and independently Hubert and Schmidt, found constructions of infinitely generated Veech groups of the first kind. Estimating the critical exponent of these groups has been an important open question for 10 years; it is now answered by Theorem 1.
Central to the construction of Hubert and Schmidt are special points, namely connection points. Hubert and Schmidt construct translation surfaces whose Veech groups are commensurable to the stabilizer SL(X;P) of P and therefore have the same critical exponent. For connection points with an infinite SL(X)-orbit (such points are called non-periodic), SL(X;P) is infinitely generated and of the first kind.
We prove Theorem 1 by showing that for every D congruent to 0 mod 4 (and not a square) and every non-periodic connection point P in L_D, the critical exponent of the group SL(L_D;P) lies strictly between 1/2 and 1.
A natural question in this context is the dependence on P: points Q in the SL(L_D)-orbit of P are likewise non-periodic connection points, and the associated groups SL(L_D;P) and SL(L_D;Q) are conjugate to each other. Therefore, Chapter 4 is devoted to determining the orbits of non-periodic connection points.
The connection points have the form P = (x_r + x_i w; y_r + y_i w) with x_r, x_i, y_r, y_i in Q. We show that the common denominator N(P) of these (reduced) fractions is an invariant of the orbit. From this it follows:
Theorem 2: There are infinitely many distinct orbits of connection points of L_D.
The action of the horizontal and vertical shears A and B in SL(L_D) is known explicitly. In the special case D = 8, these two elements generate the whole group, and we give one procedure each for finding a lower and an upper bound on the number of orbits of non-periodic connection points P with fixed common denominator N(P). With these we show:
Theorem 3: The set of connection points P with fixed value N(P) decomposes into a finite number of SL(L_8)-orbits.
In the proof of Theorem 1 it is necessary to show the non-amenability of a graph. Since we have only very little information about its structure in our concrete situation, we develop the following method in Chapter 1:
Theorem 4: Let G be a graph that can be turned, by removing edges, into a forest G′ without leaves in which the supremum of the lengths of connected valence-2 subgraphs of G′ is bounded. Then G is not amenable.
To apply this method, we assign to every vertex P of G a complexity measure s(P) and show that, under the action of words in powers of A and B, this value "tends to grow" with increasing word length.