Principles of cognitive maps
(2021)
This thesis analyses the concept of a cognitive map in the research field of geography. Cognitive mapping research is essential, as it investigates the relations between cognitive maps and the external representations of space that people regularly use when acquiring spatial knowledge, such as maps in geographic information systems. Moreover, cognitive maps, when extended to semantic maps, explain the relations between people and things in a non-physical environment, where the considered space is spanned not by distance but by other non-spatial variables. Nevertheless, cognitive maps are often distorted. Although the proper formation of a cognitive map is vital in navigation processes, cognitive distortions are barely investigated in the field of geography. By analyzing the relevant work, especially Tobler's first law of geography, a new lexical variant of Tobler's first law could be stated that presumably describes a specific distortion in the processing of landmarks in cognitive maps.
Neuropsychiatric disorders are complex, highly heritable, but incompletely understood disorders. The clinical and genetic heterogeneity of these disorders poses a significant challenge to the identification of disorder-related biomarkers. Despite significant progress in unveiling the genetic basis of these disorders, the underlying causes and biological mechanisms remain obscure. With advances in array, sequencing, and big-data technologies, a huge amount of data is generated from individuals across different platforms and in various data structures, but there is a paucity of bioinformatics tools that can integrate this plethora of data. Therefore, there is a need for an integrative bioinformatics data analysis tool that combines biological and clinical data of different types to better understand the underlying genetics.
This thesis presents a bioinformatics pipeline that integrates data from different platforms to provide a thorough understanding of the genetic etiology of a quantitative as well as a qualitative neuropsychiatric trait of interest. The thesis has two parts: the first covers the development and architecture of the bioinformatics pipeline named MApping the Genetics of neuropsychiatric traits to the molecular NETworks of the human brain (MAGNET); the second demonstrates the implementation and usefulness of MAGNET by analysing two large Autism Spectrum Disorder (ASD) cohorts.
MAGNET is a freely available command-line tool hosted on GitHub (https://github.com/SheenYo/MAGNET). It is implemented within one framework, using data integration approaches based on state-of-the-art algorithms and software, to ultimately identify the genes and pathways genetically associated with a trait of interest. MAGNET provides an edge over existing tools, since it performs a comprehensive analysis that takes care of the data handling and parsing steps necessary to communicate between the different APIs (Application Programming Interfaces). This spares researchers the intermediate data handling steps otherwise required to feed the output of one analysis into the next. Moreover, depending on the size of the dataset, users can deduce important information regarding their trait of interest within a few days. Besides providing insights into genetic associations, one of the central features is the mapping of the associated genes onto the developing human brain, using transcriptome data of 16 different brain regions covering the 5th post-conceptional week to over 40 years of age.
In the second part, as proof of concept, we applied MAGNET to two ASD cohorts. ASD is a group of psychiatric disorders. Clinically, ASD is characterized by the following psychopathology: A) limitations in social interaction and communication, and B) restricted, repetitive behavior. The etiology of this disorder is extremely complex due to its heterogeneous clinical traits and genetics; therefore, to date, no reliable biomarkers have been identified. Here, the aim is to characterize the genetic architecture of ASD, taking into account the two aforementioned ASD diagnostic domains, and to investigate whether these domains are genetically linked or independent of each other. Moreover, we addressed the question of whether these traits share genetic risk with the categorical diagnosis of ASD and how much of the phenotypic variance of these traits can be explained by the underlying genetics.
We included affected individuals from two ASD cohorts, the Autism Genome Project (AGP) and a German cohort, consisting of 2,735 and 705 families, respectively. MAGNET was applied to each of the ASD subdomains as a quantitative dependent variable. MAGNET is divided into five main sections: (1) quality check of the genotype data, (2) imputation of missing genotype data, (3) association analysis of genotype and trait data, (4) gene-based analysis, and (5) enrichment analysis using gene expression data from the human brain.
MAGNET was applied to each of the individual traits in each cohort to perform quality control of the genetic data and impute the missing data in an automated fashion. MAGNET identified 292 known and new ASD risk genes. These genes were subsequently assigned to biological signaling pathways and gene ontologies via MAGNET. The underlying biological mechanisms converged on neuronal transmission and development processes. By reconciling these genes with the transcriptome of the developing human brain, MAGNET was able to identify that the significant genes associated with the subdomains are expressed at specific time points in brain areas such as the hippocampus, amygdala, and cortical regions. Further, we found that ASD subdomains related to domain A, but not to domain B, have a shared genetic etiology.
Measuring information processing in neural data: The application of transfer entropy in neuroscience
(2017)
It is a common notion in neuroscience research that the brain and neural systems in general "perform computations" to generate their complex, everyday behavior (Schnitzer, 2002). Understanding these computations is thus an important step in understanding neural systems as a whole (Carandini, 2012; Clark, 2013; Schnitzer, 2002; de-Wit, 2016). It has been proposed that one way to analyze these computations is by quantifying basic information processing operations necessary for computation, namely the transfer, storage, and modification of information (Langton, 1990; Mitchell, 2011; Mitchell, 1993; Wibral, 2015). A framework for the analysis of these operations has been emerging (Lizier, 2010), using measures from information theory (Shannon, 1948) to analyze computation in arbitrary information processing systems (e.g., Lizier, 2012b). Of these measures, transfer entropy (TE) (Schreiber, 2000), a measure of information transfer, is the most widely used in neuroscience today (e.g., Vicente, 2011; Wibral, 2011; Gourevitch, 2007; Vakorin, 2010; Besserve, 2010; Lizier, 2011; Richter, 2016; Huang, 2015; Rivolta, 2015; Roux, 2013). Yet, despite this popularity, open theoretical and practical problems in the application of TE remain (e.g., Vicente, 2011; Wibral, 2014a). The present work addresses some of the most prominent of these methodological problems in three studies.
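For readers unfamiliar with the measure, the basic plug-in (histogram) estimate of TE for discrete time series with history length one can be sketched as follows. This is an illustrative toy estimator only, not one of the state-of-the-art estimators discussed in this work:

```python
import random
from collections import Counter
from math import log2

def transfer_entropy(source, target):
    """Plug-in TE(source -> target) in bits for discrete series with
    history length 1, i.e. the conditional mutual information
    I(y_{t+1} ; x_t | y_t)."""
    triples = Counter()   # counts of (y_t, y_{t+1}, x_t)
    pairs_yx = Counter()  # counts of (y_t, x_t)
    pairs_yy = Counter()  # counts of (y_t, y_{t+1})
    singles = Counter()   # counts of y_t
    n = len(target) - 1
    for t in range(n):
        y, y1, x = target[t], target[t + 1], source[t]
        triples[(y, y1, x)] += 1
        pairs_yx[(y, x)] += 1
        pairs_yy[(y, y1)] += 1
        singles[y] += 1
    te = 0.0
    for (y, y1, x), c in triples.items():
        p_y1_given_yx = c / pairs_yx[(y, x)]
        p_y1_given_y = pairs_yy[(y, y1)] / singles[y]
        te += (c / n) * log2(p_y1_given_yx / p_y1_given_y)
    return te

# Toy example: y copies x with a one-step delay, so information flows
# from x to y (about 1 bit per step) but not the other way round.
random.seed(0)
x = [random.randint(0, 1) for _ in range(5000)]
y = [0] + x[:-1]  # y_{t+1} = x_t
print(transfer_entropy(x, y))  # close to 1 bit
print(transfer_entropy(y, x))  # close to 0 bits
```

The asymmetry of the result is what makes TE a directed measure: unlike correlation, TE(x→y) and TE(y→x) generally differ.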
The first study presents an efficient implementation for the estimation of TE from non-stationary data. The statistical properties of non-stationary data are not invariant over time, so TE cannot be easily estimated from observations over time. Instead, the necessary observations can be collected over an ensemble of data, i.e., observations of physical or temporal replications of the same process (Gomez-Herrero, 2010). This approach is computationally more demanding than estimation from observations over time. The present study demonstrates how to handle this increased computational demand by presenting a highly parallel implementation of the estimator using graphics processing units.
The second study addresses the problem of estimating bivariate TE from multivariate data. Neuroscience research often investigates interactions between more than two (sub-)systems. It is common to analyze these interactions by iteratively estimating TE between pairs of variables, because a fully multivariate approach to TE estimation is computationally intractable (Lizier, 2012a; Das, 2008; Welch, 1982). Yet, the estimation of bivariate TE from multivariate data may yield spurious, false-positive results (Lizier, 2012a; Kaminski, 2001; Blinowska, 2004). The present study proposes that such spurious links can be identified by characteristic coupling motifs and the timings of their information transfer delays in networks of bivariate TE estimates. The study presents a graph algorithm that detects these coupling motifs and marks potentially spurious links. The algorithm thus partially corrects for spurious results due to multivariate effects and yields a more conservative approximation of the true network of multivariate information transfer.
The third study investigates the TE between prefrontal and primary visual cortical areas of two ferrets under different levels of anesthesia. Additionally, the study investigates local information processing in the source and target of the TE by estimating information storage (Lizier, 2012) and signal entropy. Results of this study indicate an alternative explanation for the commonly observed reduction in TE under anesthesia (Imas, 2005; Ku, 2011; Lee, 2013; Jordan, 2013; Untergehrer, 2014), which is often attributed to changes in the underlying coupling between areas. Instead, the present study proposes that reduced TE may be due to a reduction in information generation, measured by signal entropy, in the source of the TE. The study thus demonstrates how interpreting changes in TE as evidence for changes in causal coupling may lead to erroneous conclusions. The study further discusses current best practice in the estimation of TE, namely the use of state-of-the-art estimators over approximative methods and the use of optimization procedures for estimation parameters over ad-hoc choices. It is demonstrated how not following this best practice may lead to over- or underestimation of TE, or to a failure to detect TE altogether.
In summary, the present work proposes an implementation for the efficient estimation of TE from non-stationary data, presents a correction for spurious effects in bivariate TE estimation from multivariate data, and presents current best practice in the estimation and interpretation of TE. Taken together, the work offers solutions to some of the most pressing problems in the estimation of TE in neuroscience, improving the robust estimation of TE as a measure of information transfer in neural systems.
Chatbots are a promising technology with the potential to enhance workplaces and everyday life. In terms of scalability and accessibility, they also offer unique possibilities as communication and information tools for digital learning. In this paper, we present a systematic literature review that investigates the areas of education where chatbots have already been applied, explores the pedagogical roles of chatbots and their use for mentoring purposes, and examines their potential to personalize education. We conducted a preliminary analysis of 2,678 publications to perform this literature review, which allowed us to identify 74 relevant publications on chatbots' application in education. Through this, we address five research questions that, together, allow us to explore the current state of the art of this educational technology. We conclude our systematic review by pointing to three main research challenges: 1) aligning chatbot evaluations with implementation objectives, 2) exploring the potential of chatbots for mentoring students, and 3) exploring and leveraging the adaptation capabilities of chatbots. For all three challenges, we discuss opportunities for future research.
Proteins are biological macromolecules playing essential roles in all living organisms.
Proteins often bind with each other forming complexes to fulfill their function. Such protein complexes assemble along an ordered pathway. An assembled protein complex can often be divided into structural and functional modules. Knowing the order of assembly and the modules of a protein complex is important to understand biological processes and treat diseases related to misassembly.
Typical structures in the Protein Data Bank (PDB) contain two to three subunits and a few thousand atoms. Recent developments have allowed large protein complexes to be resolved. The increasing number and size of protein complexes demand computational assistance for visualization and analysis. One such large protein complex is respiratory complex I, comprising 45 subunits in Homo sapiens.
Complex I is a well understood protein complex that served as a case study to validate our methods.
Our aim was to analyze time-resolved Molecular Dynamics (MD) simulation data, identify modules of a protein complex and generate hypotheses for the assembly pathway of a protein complex. For that purpose, we abstracted the topology of protein complexes to Complex Graphs of the Protein Topology Graph Library (PTGL). The subunits are represented as vertices, and spatial contacts as edges. The edges are weighted with the number of contacts based on a distance threshold. This allowed us to apply graph-theoretic methods to visualize and analyze protein complexes.
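This graph abstraction can be illustrated with a toy sketch (made-up coordinates and a hypothetical threshold, not the PTGL implementation): each subunit becomes a vertex, and an edge between two subunits is weighted by the number of atom pairs closer than the distance threshold.

```python
from itertools import combinations
from math import dist

def complex_graph(subunits, threshold=4.0):
    """Build a weighted contact graph: one vertex per subunit, and an
    edge weighted by the number of atom pairs (one atom from each
    subunit) closer than `threshold`."""
    edges = {}
    for (a, atoms_a), (b, atoms_b) in combinations(subunits.items(), 2):
        contacts = sum(
            1
            for p in atoms_a
            for q in atoms_b
            if dist(p, q) < threshold
        )
        if contacts:
            edges[(a, b)] = contacts
    return edges

# Toy complex with three subunits; A and B touch, C is far away.
subunits = {
    "A": [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)],
    "B": [(2.0, 0.0, 0.0), (3.0, 0.0, 0.0)],
    "C": [(50.0, 0.0, 0.0)],
}
print(complex_graph(subunits))  # {('A', 'B'): 4}
```

The resulting edge dictionary is exactly the kind of weighted graph to which the clustering methods described below can be applied.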
We extended the implementations of two methods to achieve the computation of Complex Graphs in feasible runtimes. The first method skips checks for contacts using the information about which residues are sequential neighbors; we extended this method to protein complexes and structures containing ligands. The second method introduces spheres encompassing all atoms of a subunit and skips the check for contacts if the corresponding spheres do not overlap. Combined, both methods allowed skipping up to 93 % of the contact checks for sample complexes of 40 subunits, compared to up to 10 % for the previous implementation. We showed that the runtime of the combined method scales linearly with the number of atoms, compared to the non-linear scaling of the previous implementation. We implemented a third method that fixes the assignment of an orientation to secondary structure elements: we placed a three-dimensional vector in each secondary structure element and computed the angle between secondary structure elements to assign an orientation. This method sped up the runtime especially for large structures, such as the capsid of the human immunodeficiency virus, for which the runtime decreased from 43 to less than 9 hours.
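The sphere-based skipping idea can be sketched as follows (an illustrative approximation; the actual sphere construction in PTGL may differ). If the bounding spheres of two subunits are farther apart than the contact threshold, no atom pair can possibly be in contact, so the expensive all-pairs check can be skipped entirely.

```python
from math import dist

def bounding_sphere(atoms):
    """Approximate enclosing sphere: centroid plus the distance to the
    farthest atom (a cheap upper bound is enough for this pre-check)."""
    cx = sum(p[0] for p in atoms) / len(atoms)
    cy = sum(p[1] for p in atoms) / len(atoms)
    cz = sum(p[2] for p in atoms) / len(atoms)
    center = (cx, cy, cz)
    radius = max(dist(center, p) for p in atoms)
    return center, radius

def may_have_contacts(atoms_a, atoms_b, threshold=4.0):
    """Cheap pre-check: returns False only when the spheres are so far
    apart that no atom pair can be within `threshold` of each other."""
    (ca, ra) = bounding_sphere(atoms_a)
    (cb, rb) = bounding_sphere(atoms_b)
    return dist(ca, cb) <= ra + rb + threshold

a = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
c = [(50.0, 0.0, 0.0), (51.0, 0.0, 0.0)]
print(may_have_contacts(a, c))  # False: centers 50 apart, radii 0.5 each
```

The pre-check is conservative: it can return True for pairs without contacts, but never False for pairs that do have contacts, so skipped pairs are always safe to skip.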
The feasible runtimes allowed us to investigate two data sets of MD trajectories of respiratory complex I of Thermus thermophilus that we received. The data sets differ only in whether ubiquinone is bound to the complex. We implemented a pipeline, PTGLdynamics, to compute the contacts and Complex Graphs for all time steps of the trajectories. We investigated different methods to track changes of contacts during the simulation and created a heat map projected onto the three-dimensional structure to visualize the changes. We also created line plots to visualize the changes of contacts over the course of the simulation. Both visualizations helped to spot exceptionally flexible or rigid regions of the structure, or time points of the simulation at which major dynamics occur.
We introduced normalizations of the edge weights of Complex Graphs for identifying modules and predicting the assembly pathway. The idea is to normalize the number of contacts by the number of residues of a subunit. We defined five different normalizations.
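One plausible family of normalizations of this kind, shown purely for illustration (the five normalizations actually defined in the thesis may differ in detail), divides each raw contact count by a function of the sizes of the two subunits involved:

```python
def normalize_edges(edges, sizes, mode="min"):
    """Normalize the contact count of each edge by the sizes (number of
    residues) of the two subunits it connects.  `mode` selects one of
    three illustrative normalizations."""
    out = {}
    for (a, b), contacts in edges.items():
        if mode == "min":        # per residue of the smaller subunit
            out[(a, b)] = contacts / min(sizes[a], sizes[b])
        elif mode == "sum":      # per residue of both subunits together
            out[(a, b)] = contacts / (sizes[a] + sizes[b])
        else:                    # "absolute": keep the raw count
            out[(a, b)] = contacts
    return out

# A small subunit with 40 contacts to a big one: relative to its size,
# this interface is more extensive than the raw counts suggest.
edges = {("A", "B"): 40, ("B", "C"): 30}
sizes = {"A": 100, "B": 300, "C": 600}
print(normalize_edges(edges, sizes, mode="min"))
```

Normalization matters because a large subunit accumulates many contacts simply by having many residues; dividing by size rewards interfaces that are extensive relative to the partners involved.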
To identify structural and functional modules, we applied the Leiden graph clustering algorithm to the Complex Graphs of respiratory complex I and the respiratory supercomplex. We examined the results for the different normalizations of the weights of the Complex Graphs. The absolute edge weight produced the best result, identifying three of the four modules that have been defined in the literature for respiratory complex I.
We applied agglomerative hierarchical clustering to the edges of a Complex Graph to create hypotheses of the assembly pathway. The rationale was that subunits with an extensive interface in the final structure assemble early. We tested our method against two existing methods on a data set of 21 proteins with reported assembly pathways. Our prediction outperformed the other methods and ran in feasible runtimes of a few minutes at most.
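A greedy agglomerative scheme following this rationale can be sketched as follows (an illustrative toy, not the exact linkage criterion used in the thesis): repeatedly merge the two clusters with the largest summed interface, so that subunits with extensive interfaces assemble early.

```python
def assembly_hypothesis(edges):
    """Greedy agglomerative clustering on contact weights: repeatedly
    merge the pair of clusters with the largest summed interface and
    record the merge order as an assembly hypothesis."""
    nodes = set()
    for a, b in edges:
        nodes.update((a, b))
    clusters = [frozenset([n]) for n in nodes]

    def interface(c1, c2):
        # Total contact weight between two clusters of subunits.
        return sum(w for (a, b), w in edges.items()
                   if (a in c1 and b in c2) or (a in c2 and b in c1))

    merges = []
    while len(clusters) > 1:
        i, j = max(
            ((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
            key=lambda ij: interface(clusters[ij[0]], clusters[ij[1]]),
        )
        merges.append((set(clusters[i]), set(clusters[j])))
        merged = clusters[i] | clusters[j]
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)]
        clusters.append(merged)
    return merges

# Toy complex: A-B share the biggest interface, C docks on next, D last.
edges = {("A", "B"): 40, ("B", "C"): 15, ("A", "C"): 5, ("C", "D"): 3}
for step, (c1, c2) in enumerate(assembly_hypothesis(edges), 1):
    print(step, sorted(c1), sorted(c2))
```

Reading the merge list from first to last yields the hypothesized assembly order; reading it as nested merges yields the dendrogram that the thesis compares against reference pathways.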
We also tested our method on respiratory complex I, the respiratory supercomplex and the respiratory megacomplex. We compared the results for the different normalizations with an assembly pathway of respiratory complex I described in the literature. We transformed the assembly pathways to dendrograms and compared the predictions to the reference using the Robinson-Foulds distance and clustering information distance. We analyzed the landscape of the clustering information distance by generating random dendrograms and showed that our result is far better than expected at random. We showed in a detailed analysis that the assembly prediction using one normalization was able to capture key features of the assembly pathway that has been proposed in the literature.
In conclusion, we presented different applications of graph theory to automatically analyze the topology of protein complexes. Our programs run in feasible runtimes even for large complexes. We showed that graph-theoretic modeling of the protein structure can be used to analyze MD simulation data, identify modules of protein complexes and predict assembly pathways.
An exploratory latent class analysis of student expectations towards learning analytics services
(2021)
For service implementations to be widely adopted, the expectations of the key stakeholders must be considered. Failure to do so may lead to services reflecting ideological gaps, which will inadvertently create dissatisfaction among their users. Learning analytics research has begun to recognise the importance of understanding the student perspective on the services that could potentially be offered; however, student engagement remains low. Furthermore, there has been no attempt to explore whether students can be segmented into different groups based on their expectations towards learning analytics services. Doing so allows for a greater understanding of what is and is not expected from learning analytics services within a sample of students. The current exploratory work addresses this limitation by using the three-step approach to latent class analysis to understand whether student expectations of learning analytics services can be clearly segmented, using self-report data obtained from a sample of students at the Open University of the Netherlands. The findings show that student expectations regarding the ethical and privacy elements of a learning analytics service are consistent across all groups, whereas expectations of service features are quite variable. These results are discussed in relation to previous work on student stakeholder perspectives, policy development, and the European General Data Protection Regulation (GDPR).
We establish weighted L^p-Fourier extension estimates for O(N−k)×O(k)-invariant functions defined on the unit sphere S^(N−1), allowing for exponents p below the Stein–Tomas critical exponent 2(N+1)/(N−1). Moreover, in the more general setting of an arbitrary closed subgroup G ⊂ O(N) and G-invariant functions, we study the implications of weighted Fourier extension estimates with regard to boundedness and nonvanishing properties of the corresponding weighted Helmholtz resolvent operator. Finally, we use these properties to derive new existence results for G-invariant solutions of the nonlinear Helmholtz equation −Δu − u = Q(x)|u|^(p−2)u, u ∈ W^(2,p)(R^N), where Q is a nonnegative, bounded, G-invariant weight function.
Spin(9)-invariant valuations
(2013)
The first aim of this thesis is to give a Hadwiger-type theorem for the exceptional Lie group Spin(9). The space of Spin(9)-invariant k-homogeneous valuations is studied through the construction of an exact sequence involving spaces of differential forms. We then present a description of the spin representation using the properties of the 8-dimensional division algebra of the octonions. Using this description as well as representation-theoretic formulas, we compute the dimensions of the spaces of differential forms appearing in the exact sequence. Hence we obtain the dimensions of the spaces of k-homogeneous Spin(9)-invariant valuations for k = 0, 1, ..., 16.
In the second part of this work, we construct a new element for a basis of one of these spaces. It is clear that the k-th intrinsic volume is also Spin(9)-invariant. The last chapter of this work presents the construction of a new 2-homogeneous Spin(9)-invariant valuation. On a Riemannian manifold (M, g), we construct a valuation by integrating the curvature tensor over the disc bundle. We associate to this valuation on M a family of valuations on the tangent spaces. We show that these valuations are even and homogeneous of degree 2. Moreover, since the valuation on M is invariant under the action of the isometry group of M, the induced valuation on the tangent space at a point p in M is invariant under the action of the stabiliser of p, for all p in M. In the special case where M is the octonionic projective plane, this construction yields an even, 2-homogeneous, Spin(9)-invariant valuation whose Klain function is not constant, i.e., which is linearly independent of the second intrinsic volume.
This thesis examines three topics from the area of external-memory algorithms: approximation algorithms, dynamic algorithms, and real-time queries. Approximation algorithms are treated in both Chapter 3 and Chapter 5.
Chapter 3 presents an algorithm that heuristically determines the diameter of a graph. In the RAM model, a modified breadth-first search (BFS) is itself a cheap and highly accurate algorithm. This changes in external memory, where internal-memory BFS is too expensive due to its O(n + m) unstructured accesses to external storage. In 2008, Meyer presented a method for efficiently approximating the graph diameter in external memory, requiring O(k · scan(n + m) + sort(n + m) + √(n·m/(k·B)) · log(n/k) + MST(n, m)) I/Os with a multiplicative approximation error of O(√k · log k). The implementation presented in this thesis reduces the number of I/Os in many practical cases, via recursion, to O(k · scan(n + m) + sort(n + m) + MST(n, m)) I/Os. Several techniques were examined for selecting the starting points (master nodes) used to recursively shrink the graph, so that the error stays as small as possible. Furthermore, an adaptive rule was introduced to choose only as many master nodes as are needed for the shrunken graph to fit into main memory after as few recursive calls as possible. It is shown that the lower bound on the worst-case error thereby rises to Ω(k^(4/3−ε)) with high probability. The experimental evaluation shows, however, that considerably better results are often achieved in practice.
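For orientation, the classic in-memory double-sweep heuristic that underlies many BFS-based diameter approximations can be sketched as follows (this illustrates the basic idea only; the external-memory algorithm discussed above is considerably more involved):

```python
from collections import deque

def bfs_farthest(adj, start):
    """BFS from `start`; return the farthest vertex and its distance."""
    dist = {start: 0}
    queue = deque([start])
    far, far_d = start, 0
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                if dist[v] > far_d:
                    far, far_d = v, dist[v]
                queue.append(v)
    return far, far_d

def diameter_lower_bound(adj, start):
    """Double sweep: BFS to the farthest vertex, then BFS again from
    there; the second eccentricity is a lower bound on the diameter
    (and exact on trees)."""
    far, _ = bfs_farthest(adj, start)
    _, d = bfs_farthest(adj, far)
    return d

# Path graph 0-1-2-3-4: true diameter is 4.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(diameter_lower_bound(adj, 2))  # 4
```

In external memory, each of these BFS runs is exactly the expensive step that the chapter's recursive master-node technique is designed to avoid.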
Chapter 4 presents an algorithm that, after a new edge is inserted into a graph, updates the corresponding breadth-first search tree using O(n · (n/B^(2/3) + sort(n) · log B)) I/Os with high probability. For sufficiently large B, this is faster than a static recomputation. To realize the algorithm, a new deterministic partitioning method was developed in which the cluster sizes are balanced and can be modified efficiently. For this purpose, a dendrogram of the graph is computed on a suitable tree representation, such as a spanning tree. Each node thereby receives a label computed from its position within the tree representation. Consequently, fast bit operations can truncate the low-order digits of the label to compute clusters of size µ = 2^i, where the cluster diameter is bounded by µ; this bound must be guaranteed for the I/O complexity, since the trade-off between cluster size and hot-pool size from MM_BFS is exploited. The experimental evaluation shows that the dynamic breadth-first search is often faster than a static recomputation of the BFS tree, on both synthetic and real-world data. Even when this is not the case, it is only slower than the static MM_BFS implementation by small constant factors.
Finally, Chapter 5 presents an approximation algorithm that both contains dynamic components and can answer queries in real time. To achieve real-time capability, a query may trigger only O(1) I/Os. In the scenario of this thesis, queries for the distance between two arbitrary nodes u and v on real-world graph data are answered via a distance oracle. An implementation is presented for both mechanical hard disks and SSDs; in the online scenario, continuous queries can be answered on SSDs within milliseconds, while a large batch of queries can be answered on both architectures in amortized microseconds per query.
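The general idea of a distance oracle can be illustrated with a simple landmark-based sketch (not the oracle implemented in this thesis): distances from a few landmark vertices are precomputed once, and a query then combines the stored values in O(1) time via the triangle inequality.

```python
from collections import deque

def bfs_distances(adj, src):
    """All shortest-path (hop) distances from `src` in an unweighted graph."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

class LandmarkOracle:
    """Precompute distances from a few landmarks; a query combines them
    into an upper bound d(u,v) <= d(u,l) + d(l,v) and a lower bound
    |d(u,l) - d(l,v)| without touching the graph again."""
    def __init__(self, adj, landmarks):
        self.tables = [bfs_distances(adj, l) for l in landmarks]

    def query(self, u, v):
        upper = min(t[u] + t[v] for t in self.tables)
        lower = max(abs(t[u] - t[v]) for t in self.tables)
        return lower, upper

adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
oracle = LandmarkOracle(adj, landmarks=[0])
print(oracle.query(1, 4))  # (3, 5): the true distance is 3
```

Each query reads only the precomputed tables, which mirrors the O(1)-I/O constraint: on disk, the per-landmark entries for u and v can be laid out so that a query touches a constant number of blocks.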