The adaptive immune system protects humans against pathogens arising both outside and inside the body, as well as against cancer cells. The functionality of this process rests on the interaction and cooperation of numerous different cell types of the body and is predominantly localized within the lymph nodes. If even a single component of this sensitive process is disturbed, this can lead to a partial or complete loss of a person's immunological fitness. The aim of this work was therefore to comprehensively detect and define such aberrations of human lymph node tissue by means of digital pathology.
To this end, a digital tissue database was first established. It is based on the content management system Digital Tissue Management Suite, implemented as part of this work. In addition, the software Feature analysis in tissue histomorphometry was developed, which enables the analysis of two-dimensional whole slide images. Methods from computer vision and graph theory are employed to characterize morphological and distributional properties of the cell types of the lymph node. The software furthermore contains plug-ins for visualization and statistical analysis of the data.
Building on this purpose-built digital infrastructure, in combination with the software Imaris, two- and three-dimensionally scanned reactive and neoplastic tissue samples were digitally phenotyped. In the process, new mechanical barriers contributing to the compartmentalization of germinal centers were elucidated. Furthermore, the preservation of the quantitative ratio of individual cell populations within the germinal centers was described. Starting from the reactive phenotypes of the lymph node, pathophysiological aberrations in various lymphatic neoplasms were examined. It was shown that structural destruction in particular is frequently accompanied by a morphological change of the fibroblastic reticular cells.
Besides structural changes, cytological changes of the tumor microenvironment were also observed. So-called tumor-associated macrophages play a special role here. In this work it was shown that macrophages in the tumor microenvironment of diffuse large B-cell lymphoma and chronic lymphocytic leukemia in particular exhibit specific pathophysiological changes. It was also shown that genetic alterations of neoplastic B cells are accompanied by a general reduction of CD20 antigen density.
In summary, the results enabled the generation of a comprehensive digital-pathological profile of classical Hodgkin lymphoma. Morphological changes of neoplastic, CD30-positive Hodgkin-Reed-Sternberg cells were validated and described. Pathological changes of the connectome and the tumor microenvironment of these cells were also parametrized and quantified. Finally, the diagnostic power of digital-pathological profiles was evaluated and validated using a random forest classifier.
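The random-forest classification of digital-pathological profiles can be illustrated with a deliberately minimal sketch. Everything below is hypothetical, not from the thesis: the two morphometric features and the toy labels are invented, and the forest is reduced to bagged one-feature decision stumps with majority voting.

```python
import random

def stump_fit(X, y, feat_idx):
    """Best threshold on one feature, minimizing misclassifications."""
    best = None
    for thr in sorted({x[feat_idx] for x in X}):
        for left_label in (0, 1):
            pred = [left_label if x[feat_idx] <= thr else 1 - left_label for x in X]
            err = sum(p != t for p, t in zip(pred, y))
            if best is None or err < best[0]:
                best = (err, thr, left_label)
    _, thr, left_label = best
    return thr, left_label

def forest_fit(X, y, n_trees=51, seed=0):
    """Bootstrap the data and fit one random-feature stump per 'tree'."""
    rng = random.Random(seed)
    n, d = len(X), len(X[0])
    trees = []
    for _ in range(n_trees):
        idx = [rng.randrange(n) for _ in range(n)]          # bootstrap sample
        Xb, yb = [X[i] for i in idx], [y[i] for i in idx]
        feat = rng.randrange(d)                             # random feature
        thr, left = stump_fit(Xb, yb, feat)
        trees.append((feat, thr, left))
    return trees

def forest_predict(trees, x):
    """Majority vote over all stumps."""
    votes = sum(left if x[feat] <= thr else 1 - left for feat, thr, left in trees)
    return int(votes * 2 >= len(trees))

# Invented profiles: [mean cell eccentricity, neighbour density]
X = [[0.2, 5.0], [0.3, 4.5], [0.25, 5.5], [0.8, 1.5], [0.9, 2.0], [0.85, 1.0]]
y = [0, 0, 0, 1, 1, 1]        # 0 = reactive, 1 = neoplastic (toy labels)
trees = forest_fit(X, y)
print(forest_predict(trees, [0.25, 5.0]))   # reactive-like profile
print(forest_predict(trees, [0.85, 1.5]))   # neoplastic-like profile
```

A real morphometric classifier would use full decision trees and many more features; the stumps here only demonstrate the bagging-plus-voting principle.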
In nature, society, and technology, many disordered systems exist that show emergent behaviour: the interactions of numerous microscopic agents give rise to macroscopic, systemic properties that may not be present on the microscopic scale. Examples include phase transitions in magnetism and percolation, for instance in porous disordered media, as well as biological and social systems. Technological systems that are explicitly designed to function without central control, the Internet being the prime example, or virtual networks such as the World Wide Web, which is defined by the hyperlinks from one web page to another, also exhibit emergent properties. The study of the common network characteristics found in previously seemingly unrelated fields of science, together with the urge to explain their emergence, forms a scientific field in its own right: the science of complex networks. In this field, methodologies from physics, which lead to simplification and generalization by abstraction, help to shift the focus from the implementation details on the microscopic level to the macroscopic, coarse-grained system level. By describing the macroscopic properties that emerge from microscopic interactions, statistical physics, in particular its stochastic and computational methods, has proven to be a valuable tool for investigating such systems. The mathematical framework for the description of networks is graph theory, in hindsight founded by Euler in 1736 and an active area of research ever since. In recent years, applied graph theory has flourished through the advent of large-scale data sets made accessible by the use of computers. A paradigm for microscopic interactions among entities that locally optimize their behaviour to increase their own benefit is game theory, the mathematical framework of decision making. With early applications in economics, e.g. von Neumann (1944), game theory is an established field of mathematics.
However, game-theoretic behaviour is also found in natural systems, e.g. populations of the bacterium Escherichia coli, as described by Kerr (2002). In the present work, a combination of graph theory and game theory is used to model the interactions of selfish agents that form networks. Following brief introductions to graph theory and game theory, the present work approaches the interplay of local self-organizing rules with network properties and topology from three perspectives. To investigate the dynamics of topology reshaping, a coupling of the so-called iterated prisoners' dilemma (IPD) to the network structure is proposed and studied in Chapter 4. Depending on a free parameter in the payoff matrix, the reorganization dynamics result in various emergent network structures. The resulting topologies exhibit an increase in performance, measured by a variance of closeness, by a factor of 1.2 to 1.9, depending on the chosen free parameter. Presented in Chapter 5, the second approach focuses on a static network structure and studies the cooperativity of the system, measured by the fixation probability. Heterogeneous strategies for distributing incentives for cooperation among the players are proposed. These strategies make it possible to enhance cooperative behaviour while requiring fewer total investments. Putting the emphasis on communication networks in Chapters 6 and 7, the third approach investigates the use of routing metrics to increase the performance of data packet transport networks. Algorithms for the iterative determination of such metrics are demonstrated and investigated. The most successful of these algorithms, the hybrid metric, is able to increase the throughput capacity of a network by a factor of 7. During the investigation of the iterative weight assignments, a simple static weight assignment, the so-called logKiKj metric, is found.
In contrast to the algorithmic metrics, it incurs vanishing computational cost, yet it is able to increase the performance by a factor of 5.
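The fixation probability used above as a measure of cooperativity can be estimated by direct simulation. This is a minimal sketch under strong assumptions: a well-mixed Moran process (no network structure, unlike Chapter 5), with a single mutant strategy of relative fitness r; all parameter names are illustrative.

```python
import random

def fixation_probability(N, r, runs=2000, seed=1):
    """Estimate the probability that a single mutant of relative fitness r
    takes over a well-mixed population of size N (Moran birth-death process)."""
    rng = random.Random(seed)
    fixed = 0
    for _ in range(runs):
        i = 1                                           # current number of mutants
        while 0 < i < N:
            w_mut, w_res = i * r, float(N - i)
            # birth: chosen proportional to fitness; death: uniform over all N
            birth_mut = rng.random() < w_mut / (w_mut + w_res)
            death_mut = rng.random() < i / N
            i += (1 if birth_mut else 0) - (1 if death_mut else 0)
        fixed += (i == N)
    return fixed / runs

# Sanity check: a neutral mutant (r = 1) fixes with probability exactly 1/N,
# so the estimate for N = 10 should be close to 0.1.
print(fixation_probability(N=10, r=1.0))
# An advantageous mutant (r > 1) fixes more often than a neutral one.
print(fixation_probability(N=10, r=2.0))
```

Incentive schemes of the kind studied in the thesis would modify the effective fitness of cooperators; this sketch only shows the quantity being measured.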
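The logKiKj metric assigns each edge (i, j) the static weight log(k_i * k_j), where k_i is the degree of node i, so shortest-path routing is steered away from high-degree hubs. A minimal sketch on a toy wheel graph (the topology and the simple load measure below are illustrative choices, not the networks or throughput measure of the thesis):

```python
import heapq
from math import log

def dijkstra(adj, weights, src):
    """Single-source shortest-path distances; weights maps frozenset edges to cost."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v in adj[u]:
            nd = d + weights[frozenset((u, v))]
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def node_loads(adj, weights):
    """For each node v, count ordered pairs (s, t) whose shortest path can run through v."""
    nodes = list(adj)
    dist = {u: dijkstra(adj, weights, u) for u in nodes}
    load = {v: 0 for v in nodes}
    for s in nodes:
        for t in nodes:
            if s == t:
                continue
            for v in nodes:
                if v not in (s, t) and abs(dist[s][v] + dist[v][t] - dist[s][t]) < 1e-9:
                    load[v] += 1
    return load

# Toy topology: a wheel -- hub 0 plus a ring of 8 nodes.
n_ring = 8
adj = {0: set(range(1, n_ring + 1))}
for i in range(1, n_ring + 1):
    adj[i] = {0, i % n_ring + 1, n_ring if i == 1 else i - 1}

hops = {frozenset((u, v)): 1.0 for u in adj for v in adj[u]}      # hop-count metric
degs = {u: len(adj[u]) for u in adj}
logkk = {e: log(degs[u] * degs[v]) for e in hops for u, v in [tuple(e)]}

hub_hops = node_loads(adj, hops)[0]
hub_logkk = node_loads(adj, logkk)[0]
print(hub_hops, hub_logkk)   # hub load drops under the logKiKj metric
```

On this wheel, hop-count routing sends most traffic through the hub, while the logKiKj weights make short detours around the ring cheaper, relieving the hub; a less congested hub is the mechanism behind the increased throughput capacity.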
We investigate the utility of modern kernel-based machine learning methods for ligand-based virtual screening. In particular, we introduce a new graph kernel based on iterative graph similarity and optimal assignments, apply kernel principal component analysis to projection error-based novelty detection, and discover a new selective agonist of the peroxisome proliferator-activated receptor gamma using Gaussian process regression. Virtual screening, the computational ranking of compounds with respect to a predicted property, is a cheminformatics problem relevant to the hit generation phase of drug development. Its ligand-based variant relies on the similarity principle, which states that (structurally) similar compounds tend to have similar properties. We describe the kernel-based machine learning approach to ligand-based virtual screening; in doing so, we stress the role of molecular representations, including the (dis)similarity measures defined on them, investigate effects in high-dimensional chemical descriptor spaces and their consequences for similarity-based approaches, review literature recommendations on retrospective virtual screening, and present an example workflow. Graph kernels are formal similarity measures that are defined directly on graphs, such as the annotated molecular structure graph, and correspond to inner products. We review graph kernels, in particular those based on random walks, subgraphs, and optimal vertex assignments. Combining the latter with an iterative graph similarity scheme, we develop the iterative similarity optimal assignment graph kernel, give an iterative algorithm for its computation, prove convergence of the algorithm and the uniqueness of the solution, and provide an upper bound on the number of iterations necessary to achieve a desired precision. In a retrospective virtual screening study, our kernel consistently improved performance over chemical descriptors as well as other optimal assignment graph kernels.
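The optimal-assignment idea underlying the kernel can be shown in miniature. The sketch below is not the iterative similarity optimal assignment kernel itself but a simplified variant: vertex similarity starts from label identity, is refined by a single optimal assignment over neighbourhoods, and the kernel value is the best one-to-one vertex matching. The assignment is brute-forced over permutations, so it only works for tiny graphs; the "molecules" are toy examples.

```python
from itertools import permutations

def best_assignment(score):
    """Maximum-weight one-to-one assignment (brute force); rows <= cols assumed."""
    n = len(score)
    if n == 0:
        return 0.0
    m = len(score[0])
    return max(sum(score[i][perm[i]] for i in range(n))
               for perm in permutations(range(m), n))

def vertex_similarity(g1, g2, alpha=0.5):
    """One refinement: label match blended with the best neighbour matching."""
    v1, v2 = list(g1), list(g2)
    base = {(a, b): 1.0 if g1[a][0] == g2[b][0] else 0.0 for a in v1 for b in v2}
    sim = {}
    for a in v1:
        for b in v2:
            n1, n2 = g1[a][1], g2[b][1]
            if n1 and n2:
                rows = [[base[(x, y)] for y in n2] for x in n1]
                if len(n1) > len(n2):
                    rows = [list(r) for r in zip(*rows)]      # keep rows <= cols
                neigh = best_assignment(rows) / max(len(n1), len(n2))
            else:
                neigh = 0.0
            sim[(a, b)] = (1 - alpha) * base[(a, b)] + alpha * neigh
    return sim

def oa_kernel(g1, g2, alpha=0.5):
    """Optimal-assignment kernel value, normalized by the larger graph."""
    if len(g1) > len(g2):
        g1, g2 = g2, g1
    v1, v2 = list(g1), list(g2)
    sim = vertex_similarity(g1, g2, alpha)
    rows = [[sim[(a, b)] for b in v2] for a in v1]
    return best_assignment(rows) / max(len(v1), len(v2))

# Toy structure graphs: vertex -> (atom label, neighbour list).
g_a = {0: ("C", [1]), 1: ("C", [0, 2]), 2: ("O", [1])}
g_b = {0: ("C", [1]), 1: ("C", [0, 2]), 2: ("N", [1])}
print(oa_kernel(g_a, g_a), oa_kernel(g_a, g_b))   # self-similarity is highest
```

The full kernel iterates the neighbourhood refinement to a fixed point (hence the convergence proof and iteration bound mentioned above) and uses the Hungarian algorithm instead of brute force.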
Chemical data sets often lie on manifolds of lower dimensionality than the embedding chemical descriptor space. Dimensionality reduction methods try to identify these manifolds, effectively providing descriptive models of the data. For spectral methods based on kernel principal component analysis, the projection error is a quantitative measure of how well new samples are described by such models. This can be used for the identification of compounds structurally dissimilar to the training samples, leading to projection error-based novelty detection for virtual screening using only positive samples. We provide proof of principle by using principal component analysis to learn the concept of fatty acids. The peroxisome proliferator-activated receptor (PPAR) is a nuclear transcription factor that regulates lipid and glucose metabolism, playing a crucial role in the development of type 2 diabetes and dyslipidemia. We establish a Gaussian process regression model for PPAR gamma agonists using a combination of chemical descriptors and the iterative similarity optimal assignment kernel via multiple kernel learning. Screening of a vendor library and subsequent testing of 15 selected compounds in a cell-based transactivation assay resulted in 4 active compounds. One compound, a natural product with a cyclobutane scaffold, is a full selective PPAR gamma agonist (EC50 = 10 ± 0.2 µM, inactive on PPAR alpha and PPAR beta/delta at 10 µM). The study delivered a novel PPAR gamma agonist, de-orphanized a natural bioactive product, and hints at the natural product origins of pharmacophore patterns in synthetic ligands.
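The projection-error idea can be sketched with plain (one-component) principal component analysis: fit a low-dimensional subspace to positive samples only, then flag new samples whose reconstruction error is large. The two "fatty acid" descriptors below are invented for illustration.

```python
def pca_first_component(X, iters=200):
    """First principal component of mean-centred data via power iteration."""
    n, d = len(X), len(X[0])
    mean = [sum(col) / n for col in zip(*X)]
    Xc = [[x[j] - mean[j] for j in range(d)] for x in X]
    v = [1.0] * d
    for _ in range(iters):
        # multiply by the covariance without forming the full matrix
        w = [0.0] * d
        for row in Xc:
            dot = sum(r * vi for r, vi in zip(row, v))
            for j in range(d):
                w[j] += dot * row[j]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return mean, v

def projection_error(mean, v, x):
    """Distance of x from the learned 1-D model (the line mean + t*v)."""
    xc = [xi - m for xi, m in zip(x, mean)]
    t = sum(a * b for a, b in zip(xc, v))
    resid = [a - t * b for a, b in zip(xc, v)]
    return sum(r * r for r in resid) ** 0.5

# Invented 2-D descriptors for fatty acids: [chain length, heteroatom count].
train = [[12.0, 2.0], [14.0, 2.0], [16.0, 2.0], [18.0, 2.0]]
mean, v = pca_first_component(train)
print(projection_error(mean, v, [20.0, 2.0]))   # fatty-acid-like: small error
print(projection_error(mean, v, [6.0, 9.0]))    # dissimilar compound: large error
```

The thesis works with kernel PCA, where the same reconstruction error is computed in the kernel-induced feature space; the linear case shown here conveys the geometry of the novelty criterion.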