510 Mathematik
As an integral part of ALICE, the dedicated heavy-ion experiment at CERN’s Large Hadron Collider, the Transition Radiation Detector (TRD) contributes to the experiment’s tracking, triggering and particle identification. The central element in the TRD’s processing chain is its trigger and readout processor, the Global Tracking Unit (GTU). The GTU implements fast triggers on various signatures, which rely on the reconstruction of up to 20 000 particle track segments into global tracks, and performs the buffering and processing of event raw data as part of a complex detector readout tree.
The high data rates the system has to handle and its dual use as trigger and readout processor with shared resources and interwoven processing paths require the GTU to be a unique, high-performance parallel processing system. To achieve high data taking efficiency, all elements of the GTU are optimized for high running stability and low dead time.
The solutions presented in this thesis for the handling of readout data in the GTU, from initial reception to final assembly and transmission to the High-Level Trigger computer farm, address all these aspects. The presented concepts employ multi-event buffering, in-stream data processing, extensive embedded diagnostics, and advanced features of modern FPGAs to build a robust high-performance system that can conduct the high-bandwidth readout of the TRD with maximum stability and minimal dead time. The work summarized here covers the complete process from the conceptual layout of the multi-event data handling and segment control through its implementation, simulation, verification, commissioning and operation. It also covers the system upgrade for the second data-taking period and presents an analysis of the actual system performance.
The presented design of the GTU’s input stage, which comprises 90 FPGA-based nodes, is built to support multi-event buffering for the data received from the 18 TRD supermodules on 1080 optical links at the full sender aggregate net bandwidth of 2.16 Tbit/s. With careful design of the control logic and the overall data path, the readout on the 18 concentrator nodes of the supermodule stage can utilize an effective aggregate output bandwidth of initially 3.33 GiB/s and, after the successful readout bandwidth upgrade, 6.50 GiB/s via 18 optical links. The high achievable readout link utilization of more than 99% and the intermediate buffering of events on the GTU help to keep the dead time associated with local event building and readout typically below 10%. The GTU has been used for production data taking since the start-up of the experiment and has ever since performed the event buffering, local event building and readout for the TRD in a correct, efficient and highly dependable fashion.
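As a quick sanity check, the per-link figures implied by the aggregate numbers quoted above can be recomputed; this is only back-of-the-envelope arithmetic on values taken from the text, not a statement about GTU internals:

```python
# Back-of-the-envelope check of the per-link figures quoted above,
# using only the aggregate numbers from the text.

TRD_LINKS = 1080                  # optical links from the 18 supermodules
AGGREGATE_NET_BW_TBITS = 2.16     # full sender aggregate net bandwidth, Tbit/s

per_link_gbits = AGGREGATE_NET_BW_TBITS * 1000 / TRD_LINKS
print(f"net bandwidth per input link: {per_link_gbits:.2f} Gbit/s")  # 2.00

CONCENTRATOR_NODES = 18
READOUT_BW_GIB = {"initial": 3.33, "upgraded": 6.50}  # aggregate output, GiB/s
for stage, bw in READOUT_BW_GIB.items():
    per_link_mib = bw / CONCENTRATOR_NODES * 1024
    print(f"{stage}: {per_link_mib:.0f} MiB/s per readout link")
```

So each of the 1080 input links carries about 2 Gbit/s net, and each of the 18 readout links about 190 MiB/s before and 370 MiB/s after the upgrade.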
This article presents current stylometric studies in the Delta context. It discusses why using the cosine distance improves the success rate; experiments on vector normalization yield a better understanding of how Delta works. Using Middle High German texts, it is shown that metrical features can also be employed for authorship attribution. Furthermore, it is examined to what extent the non-normalized spelling of medieval texts affects Delta’s success rate. Using Arabic–Latin translations as an example, it is tested to what extent selective feature elimination can help to isolate the translator signal from the genre signal.
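To illustrate the kind of measure at issue here, a toy version of Cosine Delta (z-scored word frequencies compared by cosine distance, in the spirit of Burrows' Delta) might look as follows; the frequency table is made up and the setup is far simpler than the studies' actual experiments:

```python
from math import sqrt

def cosine_delta(freqs):
    """Cosine Delta on a (texts x words) table of relative word
    frequencies: z-score each word column across the corpus, then take
    pairwise cosine distances of the z-score vectors.
    Toy illustration only, not the authors' exact setup."""
    n = len(freqs)
    cols = list(zip(*freqs))
    means = [sum(c) / n for c in cols]
    sds = [sqrt(sum((v - m) ** 2 for v in c) / n) for c, m in zip(cols, means)]
    z = [[(v - m) / s for v, m, s in zip(row, means, sds)] for row in freqs]

    def cos_dist(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        return 1.0 - dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

    return [[cos_dist(u, v) for v in z] for u in z]

# three tiny "texts" described by frequencies of four function words;
# texts 0 and 1 share an author-like profile, text 2 differs
f = [[0.05, 0.02, 0.01, 0.03],
     [0.05, 0.02, 0.02, 0.03],
     [0.01, 0.05, 0.04, 0.01]]
d = cosine_delta(f)
print(round(d[0][1], 2), round(d[0][2], 2))  # texts 0 and 1 are far closer
```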
Embedding spanning structures into the random graph G(n,p) is a well-studied problem in random graph theory, but when one turns to the random r-uniform hypergraph H^(r)(n,p), much less is known. In this thesis we will examine this topic from different perspectives, providing insights into various aspects of the theory of random graphs. Our results cover the determination of existence thresholds in two models, as well as an algorithmic approach. For the embeddings, we work with random and pseudorandom structures.
Together with Person, we first observe that a general result of Riordan can be adapted from random graphs to hypergraphs, and we provide sufficient conditions under which H^(r)(n,p) contains a given spanning structure asymptotically almost surely. As applications, we discuss several spanning structures such as cubes, lattices, spheres, and Hamilton cycles in hypergraphs.
Moreover, we study universality, i.e. the question when an r-uniform hypergraph contains every hypergraph on n vertices with maximum vertex degree bounded by Δ. For H^(r)(n,p), it is shown with Person that this holds asymptotically almost surely for p = ω((ln n/n)^(1/Δ)) by combining approaches taken by Dellamonica, Kohayakawa, Rödl, and Ruciński, by Ferber, Nenadov, and Peter, and by Kim and Lee.
Any hypergraph that is universal for the family of bounded-degree r-uniform hypergraphs has to contain Ω(n^(r−r/Δ)) edges. With Hetterich and Person we exploit constructions of Alon and Capalbo to obtain universal r-uniform hypergraphs with the optimal number of edges O(n^(r−r/Δ)) when r is even, r | Δ, or Δ = 2. Furthermore, we generalise the result of Alon and Asodi about optimal universal graphs for the family of graphs with at most m edges and no isolated vertices to hypergraphs.
In an r-uniform hypergraph on n vertices, a tight Hamilton cycle consists of n edges such that there exists a cyclic ordering of the vertices in which the edges correspond to consecutive segments of r vertices. In collaboration with Allen, Koch, and Person we provide the first deterministic polynomial-time algorithm that asymptotically almost surely finds tight Hamilton cycles in random r-uniform hypergraphs with edge probability at least C log^3 n/n. This result partially answers a question of Nenadov and Skorić and of Dudek and Frieze, who proved using a second moment argument that tight Hamilton cycles already exist for p = ω(1/n) for r = 3 and p ≥ (e + o(1))/n for r ≥ 4. Moreover, our algorithm is superior to previous results of Allen, Böttcher, Kohayakawa, and Person and of Nenadov and Skorić.
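The definition of a tight Hamilton cycle above can be made concrete in a few lines; `tight_hamilton_edges` is a hypothetical helper, written here only to spell out the definition:

```python
def tight_hamilton_edges(ordering, r):
    """Edge set of the tight Hamilton cycle induced by a cyclic vertex
    ordering: every r consecutive vertices (taken cyclically) form one
    edge, giving n edges in total."""
    n = len(ordering)
    return {frozenset(ordering[(i + j) % n] for j in range(r))
            for i in range(n)}

edges = tight_hamilton_edges(list(range(6)), r=3)
print(len(edges))                          # n = 6 edges
print(sorted(sorted(e) for e in edges))
```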
Lastly, we study the model of randomly perturbed dense graphs introduced by Bohman, Frieze and Martin, that is, the union of any n-vertex graph G_α with minimum degree at least αn and G(n,p). For any fixed α > 0 and p = ω(n^(−2/(Δ+1))), we show with Böttcher, Montgomery, and Person that G_α ∪ G(n,p) almost surely contains any single spanning graph with maximum degree Δ, where Δ ≥ 5. As in previous results concerning this model, the bound used for p is lower by a log-term in comparison to the conjectured threshold for the general appearance of such subgraphs in G(n,p) alone. The new techniques we introduce also give simpler proofs of related results in the literature on trees and factors.
Measuring information processing in neural data: The application of transfer entropy in neuroscience
(2017)
It is a common notion in neuroscience research that the brain and neural systems in general "perform computations" to generate their complex, everyday behavior (Schnitzer, 2002). Understanding these computations is thus an important step in understanding neural systems as a whole (Carandini, 2012; Clark, 2013; Schnitzer, 2002; de-Wit, 2016). It has been proposed that one way to analyze these computations is by quantifying basic information processing operations necessary for computation, namely the transfer, storage, and modification of information (Langton, 1990; Mitchell, 2011; Mitchell, 1993; Wibral, 2015). A framework for the analysis of these operations has been emerging (Lizier, 2010), using measures from information theory (Shannon, 1948) to analyze computation in arbitrary information processing systems (e.g., Lizier, 2012b). Of these measures, transfer entropy (TE; Schreiber, 2000), a measure of information transfer, is the most widely used in neuroscience today (e.g., Vicente, 2011; Wibral, 2011; Gourevitch, 2007; Vakorin, 2010; Besserve, 2010; Lizier, 2011; Richter, 2016; Huang, 2015; Rivolta, 2015; Roux, 2013). Yet, despite this popularity, open theoretical and practical problems in the application of TE remain (e.g., Vicente, 2011; Wibral, 2014a). The present work addresses some of the most prominent of these methodological problems in three studies.
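For a history length of one, TE can be read as a conditional mutual information between the target's next value and the source's past, given the target's own past. The following naive discrete plug-in estimator is a purely illustrative sketch in the spirit of Schreiber (2000), not one of the estimators used in this work:

```python
import random
from collections import Counter
from math import log2

def transfer_entropy(x, y):
    """Plug-in estimate of TE(X -> Y) for discrete series, history 1:
    TE = sum p(y_t, y_{t-1}, x_{t-1})
         * log2[ p(y_t | y_{t-1}, x_{t-1}) / p(y_t | y_{t-1}) ].
    Minimal textbook sketch; real applications need careful embedding
    and bias correction."""
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))
    hist_pairs = Counter(zip(y[:-1], x[:-1]))   # (y_{t-1}, x_{t-1})
    trans_pairs = Counter(zip(y[1:], y[:-1]))   # (y_t,   y_{t-1})
    hist = Counter(y[:-1])
    n = len(y) - 1
    te = 0.0
    for (yt, yp, xp), c in triples.items():
        p_full = c / hist_pairs[(yp, xp)]           # p(y_t | y_{t-1}, x_{t-1})
        p_self = trans_pairs[(yt, yp)] / hist[yp]   # p(y_t | y_{t-1})
        te += (c / n) * log2(p_full / p_self)
    return te

random.seed(0)
x = [random.randint(0, 1) for _ in range(2000)]
y = [0] + x[:-1]                      # y copies x with a one-step delay
print(round(transfer_entropy(x, y), 2))   # close to 1 bit
print(round(transfer_entropy(y, x), 2))   # close to 0 bits
```

The asymmetry of the estimate is what makes TE a directed measure: it detects the x-to-y copy but reports essentially nothing in the reverse direction.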
The first study presents an efficient implementation for the estimation of TE from non-stationary data. The statistical properties of non-stationary data are not invariant over time, so TE cannot be easily estimated from these observations. Instead, the necessary observations can be collected over an ensemble of data, i.e., observations of physical or temporal replications of the same process (Gomez-Herrero, 2010). The latter approach is computationally more demanding than estimation from observations over time. The present study demonstrates how to handle this increased computational demand by presenting a highly parallel implementation of the estimator using graphics processing units.
The second study addresses the problem of estimating bivariate TE from multivariate data. Neuroscience research often investigates interactions between more than two (sub-)systems. It is common to analyze these interactions by iteratively estimating TE between pairs of variables, because a fully multivariate approach to TE-estimation is computationally intractable (Lizier, 2012a; Das, 2008; Welch, 1982). Yet, the estimation of bivariate TE from multivariate data may yield spurious, false-positive results (Lizier, 2012a; Kaminski, 2001; Blinowska, 2004). The present study proposes that such spurious links can be identified by characteristic coupling motifs and the timings of their information transfer delays in networks of bivariate TE-estimates. The study presents a graph algorithm that detects these coupling motifs and marks potentially spurious links. The algorithm thus partially corrects for spurious results due to multivariate effects and yields a more conservative approximation of the true network of multivariate information transfer.
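A drastically simplified sketch of the motif idea: in a network of pairwise TE estimates with reconstructed information-transfer delays, a direct link becomes suspicious when some two-step path reproduces its delay. The function below is a hypothetical illustration of that single criterion, not the thesis' actual graph algorithm:

```python
def flag_spurious(links, tol=0):
    """Flag bivariate-TE links that may be cascade artefacts: a link
    (x, z) with delay d is marked when a two-step path x -> y -> z
    exists whose delays sum to d (within tol).

    links: dict mapping (source, target) -> information-transfer delay
    """
    flagged = set()
    for (x, z), d in links.items():
        for (a, y), d1 in links.items():
            if a != x or y == z:
                continue
            d2 = links.get((y, z))
            if d2 is not None and abs(d1 + d2 - d) <= tol:
                flagged.add((x, z))
    return flagged

# X drives Y after 5 ms, Y drives Z after 3 ms; the pairwise estimate
# also shows X -> Z with an 8 ms delay -- a candidate cascade artefact.
net = {("X", "Y"): 5, ("Y", "Z"): 3, ("X", "Z"): 8}
print(flag_spurious(net))   # {('X', 'Z')}
```

Removing flagged links yields the more conservative network described above, at the cost of possibly discarding genuine direct connections whose delays happen to match.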
The third study investigates the TE between pre-frontal and primary visual cortical areas of two ferrets under different levels of anesthesia. Additionally, the study investigates local information processing in the source and target of the TE by estimating information storage (Lizier, 2012) and signal entropy. The results of this study indicate an alternative explanation for the commonly observed reduction in TE under anesthesia (Imas, 2005; Ku, 2011; Lee, 2013; Jordan, 2013; Untergehrer, 2014), which is often explained by changes in the underlying coupling between areas. Instead, the present study proposes that reduced TE may be due to a reduction in information generation, measured by signal entropy, in the source of the TE. The study thus demonstrates how interpreting changes in TE as evidence for changes in causal coupling may lead to erroneous conclusions. The study further discusses current best practice in the estimation of TE, namely the use of state-of-the-art estimators over approximative methods and the use of optimization procedures for estimation parameters over ad-hoc choices. It is demonstrated how not following this best practice may lead to over- or under-estimation of TE, or to failure to detect TE altogether.
In summary, the present work proposes an implementation for the efficient estimation of TE from non-stationary data, presents a correction for spurious effects in bivariate TE-estimation from multivariate data, and presents current best practice in the estimation and interpretation of TE. Taken together, the work offers solutions to some of the most pressing problems in the estimation of TE in neuroscience, improving the robust estimation of TE as a measure of information transfer in neural systems.
The ALICE High-Level-Trigger (HLT) is a large-scale computing farm designed and constructed for the purpose of the real-time reconstruction of particle interactions (events) inside the ALICE detector. The reconstruction of such events is based on the raw data produced in collisions inside ALICE at the Large Hadron Collider. The online reconstruction in the HLT allows triggering on certain event topologies and a significant data reduction by applying compression algorithms. Moreover, it enables real-time verification of the quality of the data.
To receive the raw data from the various sub-detectors of ALICE, the HLT is equipped with 226 custom-built FPGA-based PCI-X cards, the H-RORCs. The H-RORC interfaces the detector readout electronics to the nodes of the HLT farm. In addition to the transfer of raw data, 108 H-RORCs host 216 Fast-Cluster-Finder (FCF) processors for the Time-Projection-Chamber (TPC). The TPC is the main tracking detector of ALICE and contributes, with up to 16 GB/s, over 90% of the overall data volume. The FCF processor implements the first of two steps in the data reconstruction of the TPC. It calculates the space points and their properties from charge clouds (clusters) created by charged particles traversing the TPC's gas volume. Those space points are not only the basis for the tracking algorithm, but also allow for a Huffman-based data compression, which reduces the data volume by a factor of 4 to 6.
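The core operation behind turning a charge cloud into a space point is a weighted mean of the charge over neighbouring pads. The sketch below shows only that centre-of-gravity idea in its simplest one-dimensional form; the actual FCF processor works on pad/time bins in fixed-point FPGA arithmetic and computes further cluster properties:

```python
def cluster_centroid(charges):
    """Centre-of-gravity of a charge cluster: the ADC-weighted mean of
    pad positions, the basic step in turning a charge cloud into a
    space-point coordinate.  Illustrative 1-D sketch only.

    charges: list of (pad_position, adc_value) pairs for one cluster
    """
    total = sum(q for _, q in charges)
    return sum(pos * q for pos, q in charges) / total

# charge deposited over three neighbouring pads, peaked on pad 11
print(cluster_centroid([(10, 20), (11, 60), (12, 20)]))  # 11.0
```

Because the weighting interpolates between pads, the resolution of the reconstructed coordinate is finer than the pad pitch itself.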
The FCF processor is designed to cope with any incoming data rate up to the maximum bandwidth of the incoming optical link (160 MB/s) without creating back-pressure to the detector readout electronics. A performance comparison with the software implementation of the algorithm shows a speedup factor of about 20 compared with one AMD Opteron 6172 core @ 2.1 GHz, the CPU type used in the HLT during the LHC Run 1 campaign. A comparison with an Intel E5-2690 core @ 3.0 GHz, the CPU type used by the HLT for the LHC Run 2 campaign, results in a speedup factor of 8.5. In absolute numbers, the 216 FCF processors provide the computing performance of 4255 AMD Opteron cores or 2203 Intel cores of the previously mentioned types. The performance of the reconstruction with respect to the physics analysis is equivalent to or better than that of the official ALICE Offline clusterizer. Therefore, ALICE data taking was switched in 2011 to recording only FCF clusters and their compression, discarding the raw data from the TPC. Due to the capability to compress the clusters, the recorded data volume could be increased by a factor of 4 to 6.
For the LHC Run 3 campaign, starting in 2020, the FCF forms the foundation of the ALICE data taking and processing strategy. The raw data volume (before processing) of the upgraded TPC will exceed 3 TB/s. As a consequence, online processing of the raw data and compression of the results before they enter the online computing farms are an essential and crucial part of the computing model.
Within the scope of this thesis, the H-RORC card and the FCF processor were developed and built from scratch. The work covers the conceptual design, optimisation and implementation, as well as the verification, and is completed by performance benchmarks and experiences from real data taking.
Solving equations with several unknowns is something pupils practise as early as middle school. For some it is an exciting mathematical puzzle, for others more of an ordeal. Yet hardly anyone is aware of how many lives this saves every day: modern medical imaging is based on solving very many equations for very many unknowns.
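As a toy illustration of the "many equations, many unknowns" idea, consider two measurement rays through two tissue voxels; the numbers are hypothetical, and real image reconstruction involves millions of equations solved iteratively rather than a 2×2 system:

```python
def solve2(a11, a12, b1, a21, a22, b2):
    """Solve a 2x2 linear system by Cramer's rule -- a toy stand-in for
    the huge linear systems behind image reconstruction."""
    det = a11 * a22 - a12 * a21
    x = (b1 * a22 - b2 * a12) / det
    y = (a11 * b2 - a21 * b1) / det
    return x, y

# hypothetical ray measurements: total attenuation along a ray is the
# sum of the attenuations of the voxels it crosses
# ray 1 passes both voxels once:            x + y  = 5
# ray 2 passes voxel 1 twice, voxel 2 once: 2x + y = 8
print(solve2(1, 1, 5, 2, 1, 8))   # (3.0, 2.0)
```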
Viewing of ambiguous stimuli can lead to bistable perception alternating between the possible percepts. During continuous presentation of ambiguous stimuli, percept changes occur as single events, whereas during intermittent presentation, percept changes occur at more or less regular intervals, either as single events or in bursts. Response patterns can be highly variable and have been reported to show systematic differences between patients with schizophrenia and healthy controls. Existing models of bistable perception often use detailed assumptions and large parameter sets, which makes parameter estimation challenging. Here we propose a parsimonious stochastic model that provides a link between empirical data analysis of the observed response patterns and detailed models of underlying neuronal processes. First, we use a Hidden Markov Model (HMM) for the times between percept changes, which assumes a single state during continuous presentation and a stable and an unstable state during intermittent presentation. The HMM captures the observed differences between patients with schizophrenia and healthy controls, but remains descriptive. Second, we therefore propose a hierarchical Brownian model (HBM), which produces similar response patterns but also provides a relation to potential underlying mechanisms. The main idea is that neuronal activity is described as an activity difference between two competing neuronal populations, reflected in Brownian motions with drift. This differential activity generates switching between the two conflicting percepts and between stable and unstable states, with similar mechanisms on different neuronal levels. With only a small number of parameters, the HBM can be fitted closely to a wide variety of response patterns and captures group differences between healthy controls and patients with schizophrenia. At the same time, it provides a link to mechanistic models of bistable perception, linking the group differences to potential underlying mechanisms.
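The switching idea behind the Brownian description can be sketched in a few lines: the differential activity between the two populations diffuses with drift, and a percept switch is recorded whenever it reaches a threshold. All parameter values below are made up, and the sketch omits the hierarchical stable/unstable layer of the actual HBM:

```python
import random

def percept_switches(drift, sigma, threshold, steps, dt=0.01, seed=1):
    """Simulate the activity difference between two competing neuronal
    populations as a Brownian motion with drift; a percept switch is
    recorded whenever the process reaches +-threshold, after which the
    other percept dominates and the process restarts from 0.
    Minimal sketch with hypothetical parameters, not the fitted HBM."""
    rng = random.Random(seed)
    a, switches, sign = 0.0, 0, 1
    for _ in range(steps):
        a += sign * drift * dt + sigma * rng.gauss(0.0, dt ** 0.5)
        if abs(a) >= threshold:
            switches += 1
            sign = -sign      # the other percept now dominates
            a = 0.0
    return switches

print(percept_switches(drift=1.0, sigma=1.0, threshold=0.5, steps=5000))
```

Varying drift, noise amplitude and threshold changes the distribution of dominance durations, which is how a small parameter set can reproduce very different response patterns.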
Biological ageing is a degenerative and irreversible process, ultimately leading to the death of the organism. The process is complex and under the control of genetic, environmental and stochastic factors. Although many theories have been established during the last decades, none of them is able to fully describe the complex mechanisms which lead to ageing. Generally, biological processes and environmental factors lead to molecular damage and an accumulation of impaired cellular components. Counteracting surveillance systems, including the repair, remodelling and degradation of damaged or impaired components, work against this accumulation. Nevertheless, at some point these systems are no longer effective, either because the increasing amount of molecular damage can no longer be removed efficiently or because the repair and removal mechanisms themselves become impaired. The organism finally declines and dies. To investigate and understand these counteracting mechanisms and the complex interplay of decline and maintenance, holistic, systems-biological investigations are required. Hence, the processes which lead to ageing in the fungal model organism Podospora anserina were analysed using different advanced bioinformatics methods. In contrast to many other ageing models, P. anserina exhibits a short lifespan and low biochemical complexity, and it is readily accessible to genetic manipulation.
To achieve a general overview of the different biochemical processes affected during ageing in P. anserina, an initial comprehensive investigation was carried out, which aimed to reveal genes significantly regulated and expressed in an age-dependent manner. This investigation was based on an age-dependent transcriptome analysis. Comprehensive analyses revealed different age-related pathways and indicated that autophagy in particular may play a crucial role during ageing. For example, it was found that the expression of autophagy-associated genes increases in the course of ageing.
Subsequently, to investigate and characterise the autophagy pathway, its individual components and their interactions, Path2PPI, a new bioinformatics approach, was developed. Path2PPI enables the prediction of protein-protein interaction networks for particular pathways by means of a homology comparison approach and was applied to construct the protein-protein interaction network of autophagy in P. anserina.
The predicted network was extended by experimental data, comprising the transcriptome data as well as newly generated protein-protein interaction data obtained from a yeast two-hybrid analysis. Using different mathematical and statistical methods, the topological properties of the constructed network were compared with those of randomly generated networks to confirm its biological significance. In addition, based on this topological and functional analysis, the most important proteins were determined and functional modules were identified which correspond to the different sub-pathways of autophagy. Owing to the integrated transcriptome data, the autophagy network could be linked to the ageing process. For example, different proteins were identified whose genes are continuously up- or down-regulated during ageing, and it was shown for the first time that autophagy-associated genes are significantly often co-expressed during ageing.
The presented biological network provides a systems-biological view of autophagy and enables further studies aimed at analysing the relationship between autophagy and ageing. Furthermore, it allows the investigation of potential methods for intervening in the ageing process and for extending the healthy lifespan of P. anserina as well as of other eukaryotic organisms, in particular humans.