A one-to-one correspondence between a class of leftist trees and extended t-ary trees
(2006)
Leftist trees are a subclass of the ordered trees with the property that the shortest path from any inner node to a leaf of the subtree rooted at that node always runs through the leftmost son of that node.
This thesis presents a one-to-one correspondence between extended t-ary trees and the class of leftist trees with admissible node degrees 0, t, 2t-1, ..., 1+t(t-1). This correspondence generalises a result of R. Kemp.
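The defining property lends itself to a direct check; a minimal sketch, assuming a dict-based representation of ordered trees (the representation is an illustration, not taken from the thesis):

```python
def leaf_dist(t):
    """Length of a shortest path from the root of t down to a leaf."""
    children = t.get("children", [])
    if not children:
        return 0
    return 1 + min(leaf_dist(c) for c in children)

def is_leftist(t):
    """True if, at every inner node, a shortest root-to-leaf path of the
    subtree rooted there runs through the leftmost child (the defining
    property of leftist trees described above)."""
    children = t.get("children", [])
    if not children:
        return True
    leftmost_ok = leaf_dist(children[0]) == min(leaf_dist(c) for c in children)
    return leftmost_ok and all(is_leftist(c) for c in children)
```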
Embedding spanning structures into the random graph G(n,p) is a well-studied problem in random graph theory, but when one turns to the random r-uniform hypergraph H(r)(n,p) much less is known. In this thesis we will examine this topic from different perspectives, providing insights into various aspects of the theory of random graphs. Our results cover the determination of existence thresholds in two models, as well as an algorithmic approach. For the embeddings, we work with random and pseudorandom structures.
Together with Person we first notice that a general result of Riordan can be adapted from random graphs to hypergraphs and provide sufficient conditions for when H(r)(n,p) contains a given spanning structure asymptotically almost surely. As applications, we discuss several spanning structures such as cubes, lattices, spheres, and Hamilton cycles in hypergraphs.
Moreover, we study universality, i.e. when does an r-uniform hypergraph contain every hypergraph on n vertices with maximum vertex degree bounded by Δ? For H(r)(n,p), it is shown with Person that this holds asymptotically almost surely for p = ω((ln n/n)^(1/Δ)), combining approaches of Dellamonica, Kohayakawa, Rödl, and Ruciński, of Ferber, Nenadov, and Peter, and of Kim and Lee.
Any hypergraph that is universal for the family of bounded-degree r-uniform hypergraphs has to contain Ω(n^(r-r/Δ)) edges. With Hetterich and Person we exploit constructions of Alon and Capalbo to obtain universal r-uniform hypergraphs with the optimal number of edges O(n^(r-r/Δ)) when r is even, r | Δ, or Δ = 2. Furthermore, we generalise the result of Alon and Asodi about optimal universal graphs for the family of graphs with at most m edges and no isolated vertices to hypergraphs.
In an r-uniform hypergraph on n vertices a tight Hamilton cycle consists of n edges such that there exists a cyclic ordering of the vertices in which the edges correspond to consecutive segments of r vertices. In collaboration with Allen, Koch, and Person we provide a first deterministic polynomial-time algorithm, which asymptotically almost surely finds tight Hamilton cycles in random r-uniform hypergraphs with edge probability at least C log³n/n. This result partially answers a question of Nenadov and Skorić and of Dudek and Frieze, who proved via a second moment argument that tight Hamilton cycles exist already for p = ω(1/n) when r = 3 and p ≥ (e + o(1))/n when r ≥ 4. Moreover, our algorithm is superior to previous results of Allen, Böttcher, Kohayakawa, and Person and of Nenadov and Skorić.
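The definition of a tight Hamilton cycle can be made concrete with a small helper; a sketch, assuming hypergraph edges are given as vertex sets:

```python
def tight_hamilton_edges(ordering, r):
    """All windows of r cyclically consecutive vertices: exactly the n
    edges of the tight Hamilton cycle induced by the cyclic ordering."""
    n = len(ordering)
    return {frozenset(ordering[(i + j) % n] for j in range(r))
            for i in range(n)}

def is_tight_hamilton_cycle(edges, ordering, r):
    """Check whether a hypergraph edge set contains the tight Hamilton
    cycle given by `ordering` (edges as iterables of vertices)."""
    return tight_hamilton_edges(ordering, r) <= {frozenset(e) for e in edges}
```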
Lastly, we study the model of randomly perturbed dense graphs introduced by Bohman, Frieze, and Martin, that is, the union of any n-vertex graph G_α with minimum degree at least αn and G(n,p). For any fixed α > 0 and p = ω(n^(-2/(Δ+1))), we show with Böttcher, Montgomery, and Person that G_α ∪ G(n,p) almost surely contains any single spanning graph with maximum degree Δ, where Δ ≥ 5. As in previous results concerning this model, the bound used for p is a log-term lower than the conjectured threshold for the general appearance of such subgraphs in G(n,p) alone. The new techniques we introduce also give simpler proofs of related results in the literature on trees and factors.
Measuring information processing in neural data: The application of transfer entropy in neuroscience
(2017)
It is a common notion in neuroscience research that the brain and neural systems in general "perform computations" to generate their complex, everyday behavior (Schnitzer, 2002). Understanding these computations is thus an important step in understanding neural systems as a whole (Carandini, 2012; Clark, 2013; Schnitzer, 2002; de-Wit, 2016). It has been proposed that one way to analyze these computations is by quantifying basic information processing operations necessary for computation, namely the transfer, storage, and modification of information (Langton, 1990; Mitchell, 2011; Mitchell, 1993; Wibral, 2015). A framework for the analysis of these operations has been emerging (Lizier, 2010), using measures from information theory (Shannon, 1948) to analyze computation in arbitrary information processing systems (e.g., Lizier, 2012b). Of these measures, transfer entropy (TE) (Schreiber, 2000), a measure of information transfer, is the most widely used in neuroscience today (e.g., Vicente, 2011; Wibral, 2011; Gourevitch, 2007; Vakorin, 2010; Besserve, 2010; Lizier, 2011; Richter, 2016; Huang, 2015; Rivolta, 2015; Roux, 2013). Yet, despite this popularity, open theoretical and practical problems in the application of TE remain (e.g., Vicente, 2011; Wibral, 2014a). The present work addresses some of the most prominent of these methodological problems in three studies.
The first study presents an efficient implementation for the estimation of TE from non-stationary data. The statistical properties of non-stationary data are not invariant over time, such that TE cannot be easily estimated from these observations. Instead, the necessary observations can be collected over an ensemble of data, i.e., observations of physical or temporal replications of the same process (Gomez-Herrero, 2010). The latter approach is computationally more demanding than estimation from observations over time. The present study demonstrates how to handle this increased computational demand by presenting a highly parallel implementation of the estimator using graphics processing units.
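The ensemble idea, pooling samples across trial replications instead of across time, can be illustrated with a discrete plug-in estimator of TE with history length 1. This is only an illustration; the thesis targets continuous-valued estimators running on graphics processing units.

```python
import math
from collections import Counter

def ensemble_te(trials_x, trials_y):
    """Plug-in transfer entropy TE(X -> Y) in bits, history length 1,
    pooling samples across an ensemble of trial replications rather than
    across time, as needed for non-stationary data (sketch only)."""
    triples = Counter()  # counts of (y_next, y_past, x_past)
    for xs, ys in zip(trials_x, trials_y):
        for t in range(len(ys) - 1):
            triples[(ys[t + 1], ys[t], xs[t])] += 1
    n = sum(triples.values())
    yy, yx, y = Counter(), Counter(), Counter()  # marginal counts
    for (yn, yp, xp), c in triples.items():
        yy[(yn, yp)] += c
        yx[(yp, xp)] += c
        y[yp] += c
    # TE = sum p(yn,yp,xp) * log2[ p(yn|yp,xp) / p(yn|yp) ]
    return sum((c / n) * math.log2((c / yx[(yp, xp)]) / (yy[(yn, yp)] / y[yp]))
               for (yn, yp, xp), c in triples.items())
```

For a source that drives the target with a one-sample delay, this estimate approaches the entropy rate of the source; for independent signals it approaches zero (up to a small positive plug-in bias).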
The second study addresses the problem of estimating bivariate TE from multivariate data. Neuroscience research often investigates interactions between more than two (sub-)systems. It is common to analyze these interactions by iteratively estimating TE between pairs of variables, because a fully multivariate approach to TE estimation is computationally intractable (Lizier, 2012a; Das, 2008; Welch, 1982). Yet, the estimation of bivariate TE from multivariate data may yield spurious, false-positive results (Lizier, 2012a; Kaminski, 2001; Blinowska, 2004). The present study proposes that such spurious links can be identified by characteristic coupling motifs and the timings of their information transfer delays in networks of bivariate TE estimates. The study presents a graph algorithm that detects these coupling motifs and marks potentially spurious links. The algorithm thus partially corrects for spurious results due to multivariate effects and yields a more conservative approximation of the true network of multivariate information transfer.
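The core of such a graph rule, flagging a bivariate link as potentially spurious when a two-step path reproduces its information-transfer delay, can be sketched as follows (a strong simplification; the actual algorithm in the study handles further motif types, and the delay-sum criterion here is an illustrative assumption):

```python
def flag_spurious(links):
    """Flag bivariate TE links that a two-step path can explain: X -> Z
    is marked potentially spurious if links X -> Y and Y -> Z exist whose
    information-transfer delays sum to the delay of X -> Z.
    `links` maps (source, target) pairs to delays."""
    flagged = set()
    for (x, z), d_xz in links.items():
        for (x2, y), d_xy in links.items():
            if x2 == x and y != z and (y, z) in links \
                    and d_xy + links[(y, z)] == d_xz:
                flagged.add((x, z))
    return flagged
```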
The third study investigates the TE between prefrontal and primary visual cortical areas of two ferrets under different levels of anesthesia. Additionally, the study investigates local information processing in the source and target of the TE by estimating information storage (Lizier, 2012) and signal entropy. The results of this study indicate an alternative explanation for the commonly observed reduction in TE under anesthesia (Imas, 2005; Ku, 2011; Lee, 2013; Jordan, 2013; Untergehrer, 2014), which is often explained by changes in the underlying coupling between areas. Instead, the present study proposes that reduced TE may be due to a reduction in information generation, measured by signal entropy, in the source of the TE. The study thus demonstrates how interpreting changes in TE as evidence for changes in causal coupling may lead to erroneous conclusions. The study further discusses current best practice in the estimation of TE, namely the use of state-of-the-art estimators over approximate methods and the use of optimization procedures for estimation parameters over ad-hoc choices. It is demonstrated how not following this best practice may lead to over- or under-estimation of TE, or to a failure to detect TE altogether.
In summary, the present work proposes an implementation for the efficient estimation of TE from non-stationary data, it presents a correction for spurious effects in bivariate TE-estimation from multivariate data, and it presents current best-practice in the estimation and interpretation of TE. Taken together, the work presents solutions to some of the most pressing problems of the estimation of TE in neuroscience, improving the robust estimation of TE as a measure of information transfer in neural systems.
The ALICE High-Level Trigger (HLT) is a large-scale computing farm designed and constructed for the real-time reconstruction of particle interactions (events) inside the ALICE detector. The reconstruction of such events is based on the raw data produced in collisions inside ALICE at the Large Hadron Collider. The online reconstruction in the HLT allows triggering on certain event topologies and a significant data reduction by applying compression algorithms. Moreover, it enables real-time verification of the data quality.
To receive the raw data from the various sub-detectors of ALICE, the HLT is equipped with 226 custom-built FPGA-based PCI-X cards, the H-RORCs. The H-RORC interfaces the detector readout electronics to the nodes of the HLT farm. In addition to the transfer of raw data, 108 H-RORCs host 216 Fast-Cluster-Finder (FCF) processors for the Time Projection Chamber (TPC). The TPC is the main tracking detector of ALICE and contributes, with up to 16 GB/s, over 90% of the overall data volume. The FCF processor implements the first of two steps in the data reconstruction of the TPC: it calculates the space points and their properties from charge clouds (clusters) created by charged particles traversing the TPC's gas volume. Those space points are not only the basis for the tracking algorithm, but also allow for a Huffman-based data compression, which reduces the data volume by a factor of 4 to 6.
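The space-point step can be illustrated by a charge-weighted centroid computation over a cluster of digits, a much-simplified software stand-in for what the FCF does in FPGA logic (the (pad, time, charge) digit format is an assumption for illustration; the hardware processor also computes further cluster properties):

```python
def space_point(digits):
    """Charge-weighted centroid and total charge of a cluster.
    `digits` is a list of (pad, time_bin, charge) tuples; returns the
    cluster centre in pad and time coordinates plus the summed charge."""
    q_tot = sum(q for _, _, q in digits)
    pad = sum(p * q for p, _, q in digits) / q_tot
    time = sum(t * q for _, t, q in digits) / q_tot
    return pad, time, q_tot
```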
The FCF processor is designed to cope with any incoming data rate up to the maximum bandwidth of the incoming optical link (160 MB/s) without creating back-pressure towards the detector readout electronics. A performance comparison with the software implementation of the algorithm shows a speedup factor of about 20 compared with one AMD Opteron 6172 core @ 2.1 GHz, the CPU type used in the HLT during the LHC Run 1 campaign. Comparison with an Intel E5-2690 core @ 3.0 GHz, the CPU type used by the HLT for the LHC Run 2 campaign, results in a speedup factor of 8.5. In total numbers, the 216 FCF processors provide the computing performance of 4255 AMD Opteron cores or 2203 Intel cores of the previously mentioned types. The performance of the reconstruction with respect to the physics analysis is equivalent to or better than the official ALICE offline clusterizer. Therefore, ALICE data taking was switched in 2011 to recording only the FCF clusters in compressed form, discarding the raw data from the TPC. Due to the capability to compress the clusters, the recorded data volume could be increased by a factor of 4 to 6.
For the LHC Run3 campaign, starting in 2020, the FCF builds the foundation of the ALICE data taking and processing strategy. The raw data volume (before processing) of the upgraded TPC will exceed 3 TB/s. As a consequence, online processing of the raw data and compression of the results before it enters the online computing farms is an essential and crucial part of the computing model.
Within the scope of this thesis, the H-RORC card and the FCF processor were developed and built from scratch. The thesis covers the conceptual design, optimisation, and implementation, as well as the verification, and is completed by performance benchmarks and experience from real data taking.
Urn models are simple examples of random growth processes that involve various competing types. In the study of these schemes, one is generally interested in the impact of the specific form of interaction on the allocation of elements to the types. Depending on their reciprocal action, effects of cancellation and self-reinforcement become apparent in the long run of the system. For some urn models, the interaction has a smoothing effect, and the asymptotic allocation to the types is close to being a result of independent and identically distributed growth events. For others, by contrast, almost sure random tendencies or logarithmically periodic terms emerge in the second growth order. The present thesis is devoted to the derivation of central limit theorems in the latter case. For urns of this kind, we use a "non-classical" normalisation to derive asymptotic joint normality of the types. This normalisation takes random tendencies and phases into account and consequently involves random centering and possibly also random scaling.
Biological ageing is a degenerative and irreversible process, ultimately leading to the death of the organism. The process is complex and under the control of genetic, environmental, and stochastic traits. Although many theories have been established during the last decades, none of them fully describes the complex mechanisms which lead to ageing. Generally, biological processes and environmental factors cause molecular damage and an accumulation of impaired cellular components. Counteracting surveillance systems work against this, including the repair, remodelling, and degradation of damaged or impaired components. Nevertheless, at some point these systems are no longer effective, either because the increasing amount of molecular damage can no longer be removed efficiently or because the repair and removal mechanisms themselves become impaired. The organism finally declines and dies. Investigating and understanding these counteracting mechanisms and the complex interplay of decline and maintenance requires holistic, systems-biological approaches. Hence, the processes which lead to ageing in the fungal model organism Podospora anserina were analysed using different advanced bioinformatics methods. In contrast to many other ageing models, P. anserina exhibits a short lifespan and low biochemical complexity, and it is readily accessible to genetic manipulation.
To achieve a general overview of the different biochemical processes affected during ageing in P. anserina, an initial comprehensive investigation was carried out, aiming to reveal genes significantly regulated and expressed in an age-dependent manner. This investigation was based on an age-dependent transcriptome analysis. Sophisticated and comprehensive analyses revealed different age-related pathways and indicated that autophagy in particular may play a crucial role during ageing. For example, it was found that the expression of autophagy-associated genes increases in the course of ageing.
Subsequently, to investigate and to characterise the autophagy pathway, its associated single components and their interactions, Path2PPI, a new bioinformatics approach, was developed. Path2PPI enables the prediction of protein-protein interaction networks of particular pathways by means of a homology comparison approach and was applied to construct the protein-protein interaction network of autophagy in P. anserina.
The predicted network was extended by experimental data, comprising the transcriptome data as well as newly generated protein-protein interaction data obtained from a yeast two-hybrid analysis. Using different mathematical and statistical methods, the topological properties of the constructed network were compared with those of randomly generated networks to confirm its biological significance. In addition, based on this topological and functional analysis, the most important proteins were determined and functional modules were identified which correspond to the different sub-pathways of autophagy. Thanks to the integrated transcriptome data, the autophagy network could be linked to the ageing process. For example, different proteins were identified whose genes are continuously up- or down-regulated during ageing, and it was shown for the first time that autophagy-associated genes are significantly often co-expressed during ageing.
The presented biological network provides a systems biological view on autophagy and enables further studies, which aim to analyse the relationship of autophagy and ageing. Furthermore, it allows the investigation of potential methods for intervention into the ageing process and to extend the healthy lifespan of P. anserina as well as of other eukaryotic organisms, in particular humans.
For the class of balanced, irreducible Pólya urn schemes with two colours, say black and white, limit theorems for the number of black balls after n steps are known. Depending on the ratio of the eigenvalues of the replacement matrix, two regimes of limit laws occur: almost sure convergence to a non-degenerate random variable whose distribution depends on the initial composition of the urn and that is known to be not normally distributed and weak convergence to the normal distribution. In this thesis, upper bounds on the rates of convergence in both the non-normal limit case and the normal limit case are given.
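The dynamics behind both regimes are easy to simulate; a minimal sketch (the list-of-rows replacement matrix and the drawing convention are assumptions for illustration):

```python
import random

def urn_counts(replacement, start, steps, rng):
    """Simulate a two-colour urn: repeatedly draw a ball uniformly at
    random, return it, and add balls of both colours according to the row
    of the replacement matrix for the drawn colour.
    Returns the final (black, white) counts."""
    black, white = start
    for _ in range(steps):
        drew_black = rng.random() * (black + white) < black
        add_b, add_w = replacement[0] if drew_black else replacement[1]
        black += add_b
        white += add_w
    return black, white
```

For the classical Pólya urn with replacement matrix [[1, 0], [0, 1]] and initial composition (a, b), the fraction of black balls converges almost surely to a Beta(a, b)-distributed limit, the prototype of the non-normal regime mentioned above.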
Recently, Aumüller and Dietzfelbinger proposed a version of a dual-pivot Quicksort, called "Count", which is optimal among dual-pivot versions with respect to the average number of key comparisons required. In this master's thesis we provide further probabilistic analysis of "Count". We derive an exact formula for the average number of swaps needed by "Count" as well as an asymptotic formula for the variance of the number of swaps and a limit law. Also for the number of key comparisons the asymptotic variance and a limit law are identified. We also consider both complexity measures jointly and find their asymptotic correlation.
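The "Count" strategy decides, for each element, which pivot to compare with first, based on how many small and large elements have been classified so far. A list-based sketch of this idea (the thesis analyses an in-place partitioning procedure; this version only mirrors the comparison strategy):

```python
def dual_pivot_quicksort(a):
    """Dual-pivot quicksort with a 'Count'-style comparison strategy:
    each element is first compared with the pivot whose class (small or
    large) has been seen more often so far (sketch, not the exact
    algorithm of Aumüller and Dietzfelbinger)."""
    if len(a) <= 1:
        return list(a)
    p, q = sorted((a[0], a[-1]))
    small, medium, large = [], [], []
    s = l = 0  # counts of small / large elements classified so far
    for x in a[1:-1]:
        if s >= l:            # expect another small: compare with p first
            if x < p:
                small.append(x); s += 1
            elif x <= q:
                medium.append(x)
            else:
                large.append(x); l += 1
        else:                 # expect another large: compare with q first
            if x > q:
                large.append(x); l += 1
            elif x >= p:
                medium.append(x)
            else:
                small.append(x); s += 1
    return (dual_pivot_quicksort(small) + [p] +
            dual_pivot_quicksort(medium) + [q] +
            dual_pivot_quicksort(large))
```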
The future heavy-ion experiment CBM (FAIR/GSI, Darmstadt, Germany) will focus on the measurement of very rare probes, which requires the experiment to operate at extreme interaction rates of up to 10 MHz. Due to the high multiplicity of charged particles in heavy-ion collisions, this will lead to data rates of up to 1 TB/s. To meet currently achievable archival rates, this data flow has to be reduced online by more than two orders of magnitude.
The rare observables exhibit complicated trigger signatures and require full event topology reconstruction to be performed online. The huge data rates, together with the absence of simple hardware triggers, make the traditional latency-limited trigger architectures typical of conventional experiments inapplicable to CBM. Instead, CBM will employ a novel data acquisition concept with autonomous, self-triggered front-end electronics.
While in conventional experiments with event-by-event processing the association of detector hits with the corresponding physical event is known a priori, this is not true for the CBM experiment, where the reconstruction algorithms must be modified to process non-event-associated data. At the highest interaction rates, the time difference between hits belonging to the same collision will be larger than the average time difference between two consecutive collisions. Thus, events will overlap in time. Because of this possible overlap, one needs to analyze time-slices rather than isolated events.
The time-stamped data will be shipped and collected into a readout buffer in the form of time-slices of a certain length. The time-slice data will be delivered to a large computer farm, where the archival decision will be made after performing online reconstruction. In this case, the association of hit information with physical events must be performed in software and requires full online event reconstruction not only in space but also in time, the so-called 4-dimensional (4D) track reconstruction.
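The grouping of time-stamped hits into fixed-length time-slices can be sketched in a few lines (the (timestamp, payload) hit format and the fixed slice length are illustrative assumptions; the real readout groups data in hardware and micro-slices):

```python
def build_time_slices(hits, slice_length):
    """Partition time-stamped detector hits into consecutive time-slices
    of fixed length. `hits` is an iterable of (timestamp, payload) pairs;
    returns the non-empty slices in time order, each sorted by time."""
    slices = {}
    for t, payload in hits:
        slices.setdefault(int(t // slice_length), []).append((t, payload))
    return [sorted(slices[k]) for k in sorted(slices)]
```

Note that event association still has to happen inside each slice: hits of overlapping collisions land in the same slice, which is exactly why the reconstruction must also resolve events in time.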
Within the scope of this work, the 4D track finder algorithm for online reconstruction has been developed. The 4D CA track finder reproduces the performance and speed of the traditional event-based algorithm. It is both vectorized (using SIMD instructions) and parallelized (between CPU cores) and shows strong scalability on many-core systems: a speed-up factor of 10.1 has been achieved on a CPU with 10 hyper-threaded physical cores.
The 4D CA track finder algorithm is ready for the time-slice-based reconstruction in the CBM experiment.