Biological ageing is a degenerative and irreversible process, ultimately leading to the death of the organism. The process is complex and under the control of genetic, environmental and stochastic factors. Although many theories have been established during the last decades, none of them is able to fully describe the complex mechanisms that lead to ageing. Generally, biological processes and environmental factors lead to molecular damage and an accumulation of impaired cellular components. Counteracting surveillance systems are in place, including the repair, remodelling and degradation of damaged or impaired components. Nevertheless, at some point these systems are no longer effective, either because the increasing amount of molecular damage can no longer be removed efficiently or because the repair and removal mechanisms themselves become impaired. The organism finally declines and dies. To investigate and understand these counteracting mechanisms and the complex interplay of decline and maintenance, holistic, systems-biological investigations are required. Hence, the processes that lead to ageing in the fungal model organism Podospora anserina were analysed using different advanced bioinformatics methods. In contrast to many other ageing models, P. anserina exhibits a short lifespan and lower biochemical complexity, and it is readily accessible to genetic manipulation.
To achieve a general overview of the different biochemical processes that are affected during ageing in P. anserina, an initial comprehensive investigation was performed, which aimed to reveal genes significantly regulated and expressed in an age-dependent manner. This investigation was based on an age-dependent transcriptome analysis. Comprehensive analyses revealed different age-related pathways and indicated that autophagy in particular may play a crucial role during ageing. For example, it was found that the expression of autophagy-associated genes increases in the course of ageing.
Subsequently, to investigate and characterise the autophagy pathway, its individual components and their interactions, Path2PPI, a new bioinformatics approach, was developed. Path2PPI enables the prediction of protein-protein interaction networks of particular pathways by means of a homology comparison approach and was applied to construct the protein-protein interaction network of autophagy in P. anserina.
The predicted network was extended by experimental data, comprising the transcriptome data as well as newly generated protein-protein interaction data obtained from a yeast two-hybrid analysis. Using different mathematical and statistical methods, the topological properties of the constructed network were compared with those of randomly generated networks to confirm its biological significance. In addition, based on this topological and functional analysis, the most important proteins were determined and functional modules were identified, which correspond to the different sub-pathways of autophagy. Due to the integrated transcriptome data, the autophagy network could be linked to the ageing process. For example, several proteins were identified whose genes are continuously up- or down-regulated during ageing, and it was shown for the first time that autophagy-associated genes are significantly often co-expressed during ageing.
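The comparison against randomly generated networks can be sketched generically. The snippet below is an illustration with a toy graph and an Erdős–Rényi null model, not the actual analysis of the thesis: it compares the observed average clustering coefficient with the distribution obtained from random graphs of the same size.

```python
import random
from itertools import combinations

def avg_clustering(adj):
    """Mean local clustering coefficient of an undirected graph
    given as {node: set of neighbours}."""
    total = 0.0
    for node, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue  # nodes with < 2 neighbours contribute 0
        links = sum(1 for u, v in combinations(nbrs, 2) if v in adj[u])
        total += 2.0 * links / (k * (k - 1))
    return total / len(adj)

def random_graph(n, m, rng):
    """Erdos-Renyi G(n, m): n nodes, m edges chosen uniformly."""
    adj = {i: set() for i in range(n)}
    for u, v in rng.sample(list(combinations(range(n), 2)), m):
        adj[u].add(v)
        adj[v].add(u)
    return adj

# toy "interaction network": a clique plus two peripheral nodes
adj = {0: {1, 2, 3}, 1: {0, 2, 3}, 2: {0, 1, 3},
       3: {0, 1, 2, 4}, 4: {3, 5}, 5: {4}}
n = len(adj)
m = sum(len(v) for v in adj.values()) // 2

rng = random.Random(42)
observed = avg_clustering(adj)
null = [avg_clustering(random_graph(n, m, rng)) for _ in range(200)]
p_like = sum(1 for c in null if c >= observed) / len(null)
print(f"observed clustering {observed:.3f}, "
      f"null mean {sum(null)/len(null):.3f}, empirical p {p_like:.3f}")
```

A small empirical p value would indicate that the observed clustering is unlikely under the random null model, the same logic used to argue biological significance of network topology.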
The presented biological network provides a systems-biological view of autophagy and enables further studies aiming to analyse the relationship between autophagy and ageing. Furthermore, it allows the investigation of potential methods for intervening in the ageing process and extending the healthy lifespan of P. anserina as well as of other eukaryotic organisms, in particular humans.
We consider algorithms for strategic communication with commitment power between two rational, self-interested parties. If a party has commitment power, it commits to a strategy of play, announces it publicly, and can no longer deviate from it.
Both parties have prior information about the state of the world. The first party (S) is able to observe this state directly. The second party (R), however, makes a decision by choosing one of n actions whose types are unknown to R. The type determines the possibly different, non-negative utilities for S and R. By sending signals, S tries to influence R's choice. We consider two basic scenarios: Bayesian persuasion and delegated search.
In Bayesian persuasion, S has commitment power. Here, S commits to a signalling scheme φ and announces it to R. The scheme describes which signal S sends in which situation. Only afterwards does S learn the true state of the world. After receiving the signals determined by φ, R chooses one of the actions. Knowing φ allows R to update its beliefs about the state of the world based on the received signals. S must take this into account when designing φ, because R will not follow recommendations that advantage S at R's expense. We study the problem from the perspective of S and describe signalling schemes that guarantee S the largest possible utility.
We first consider the offline case. Here, S learns the complete state of the world and then sends a signal to R. We consider a scenario with a bounded number of k ≤ n signals. With only k signals, S can recommend at most k distinct actions. For various symmetric instances, we describe a polynomial-time algorithm that computes an optimal signalling scheme with k signals.
Furthermore, we consider a subset of instances in which the types are drawn from known, independent distributions. We describe polynomial-time algorithms that compute a signalling scheme with k signals guaranteeing a constant approximation factor relative to the optimal signalling scheme with k signals.
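To make the signalling idea concrete, here is the textbook binary persuasion example (a single good/bad state with prior p < 1/2, where the receiver acts only if the posterior probability of "good" is at least 1/2). This standard example is illustrative and is not an instance from the thesis.

```python
def optimal_binary_scheme(p):
    """Optimal sender-preferred scheme for the classic binary persuasion
    example: the receiver acts iff P(good | signal) >= 1/2 (ties broken
    in the sender's favour, as is standard)."""
    assert 0 < p < 0.5
    q = p / (1 - p)              # prob. of recommending "act" in the bad state
    value = p + (1 - p) * q      # sender's utility = P(action taken) = 2p
    posterior = p / value        # receiver's belief after an "act" signal
    return q, value, posterior

q, value, posterior = optimal_binary_scheme(0.3)
print(round(q, 4), round(value, 4), round(posterior, 4))  # → 0.4286 0.6 0.5
```

The scheme always recommends "act" in the good state and mixes in the bad state just enough that the posterior after "act" is exactly 1/2, the obedience constraint binding with equality; the sender's utility doubles from p to 2p.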
In the online case, the action types are revealed one by one in rounds. After inspecting the current action, S sends a signal, and R must react immediately by accepting or rejecting the action. The process ends once an action is chosen. Otherwise, the next action type is revealed, and earlier actions can no longer be chosen. As a benchmark for our online signalling schemes, we use the best offline signalling scheme.
We first consider a scenario with independent distributions and show how an optimal signalling scheme can be computed in polynomial time. However, there are instances in which, unlike in the offline case, S cannot achieve any positive value online. We then consider a subset of instances for which a simple signalling scheme guarantees a constant approximation factor, and we show its optimality.
In addition, we consider 16 scenarios with different levels of information for S and R and different objective functions for S and R, under the assumption that the action types are unknown a priori but are revealed in uniformly random order. For 14 of these cases, we describe signalling schemes with a constant approximation factor; for the remaining two cases, no such schemes exist. Moreover, for most of the cases, we show that the stated approximation guarantees are optimal.
In the second part, we consider an online variant of delegated search. Here, R has the commitment power. The action types are drawn from known, independent distributions. Before S observes the realized types, R commits to an acceptance scheme φ. For every type, φ specifies the probability with which R accepts it. Consequently, S tries to find an action whose type is good for S itself and is accepted by R. Since the process runs online, S must decide for each action individually whether to propose or discard it. Only proposed actions can be chosen by R.
For the offline case with identically distributed action types, constant approximation factors are known relative to an action of optimal value for R. We show that in the online case, R can in general only achieve a Θ(1/n)-approximation. The benchmark is the expected value of a one-dimensional online search by R.
Since this bound requires an exponential discrepancy in the type values for S, we consider parameterized instances. The parameters bound the values for S or, respectively, the ratio between the values for R and S. We show (nearly) optimal logarithmic approximation factors with respect to these parameters, guaranteed by efficiently computable schemes.
The ALICE High-Level-Trigger (HLT) is a large-scale computing farm designed and constructed for the real-time reconstruction of particle interactions (events) inside the ALICE detector. The reconstruction of such events is based on the raw data produced in collisions inside ALICE at the Large Hadron Collider. The online reconstruction in the HLT allows triggering on certain event topologies and a significant data reduction by applying compression algorithms. Moreover, it enables real-time verification of the quality of the data.
To receive the raw data from the various sub-detectors of ALICE, the HLT is equipped with 226 custom-built FPGA-based PCI-X cards, the H-RORCs. The H-RORC interfaces the detector readout electronics to the nodes of the HLT farm. In addition to the transfer of raw data, 108 H-RORCs host 216 Fast-Cluster-Finder (FCF) processors for the Time-Projection-Chamber (TPC). The TPC is the main tracking detector of ALICE and contributes, with up to 16 GB/s, over 90% of the overall data volume. The FCF processor implements the first of two steps in the data reconstruction of the TPC. It calculates the space points and their properties from charge clouds (clusters) created by charged particles traversing the TPC's gas volume. These space points are not only the basis for the tracking algorithm, but also allow for a Huffman-based data compression, which reduces the data volume by a factor of 4 to 6.
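The cluster-finding step can be illustrated in miniature. The sketch below is a simplified, hypothetical 1-D version of what a cluster finder does (threshold runs of charge, then a centre of gravity per run); the real FCF operates on pad/time data in FPGA logic.

```python
def find_clusters(charges, threshold=2):
    """Split a 1-D charge sequence into above-threshold runs and return
    (total charge, centre of gravity) for each run."""
    clusters, run = [], []
    for pos, q in enumerate(charges):
        if q >= threshold:
            run.append((pos, q))
        elif run:
            total = sum(q for _, q in run)
            cog = sum(p * q for p, q in run) / total
            clusters.append((total, cog))
            run = []
    if run:  # flush a run that ends at the sequence boundary
        total = sum(q for _, q in run)
        clusters.append((total, sum(p * q for p, q in run) / total))
    return clusters

# e.g. two charge clouds along one hypothetical pad row
print(find_clusters([0, 3, 7, 3, 0, 0, 2, 5, 2, 0]))  # → [(13, 2.0), (9, 7.0)]
```

The charge-weighted centroid is what turns a cloud of ADC samples into a single space point; storing only (total, centroid) pairs instead of raw samples is also the basis of the data reduction mentioned above.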
The FCF processor is designed to cope with any incoming data rate up to the maximum bandwidth of the incoming optical link (160 MB/s) without creating back-pressure on the detector's readout electronics. A performance comparison with the software implementation of the algorithm shows a speedup factor of about 20 compared with one AMD Opteron 6172 core @ 2.1 GHz, the CPU type used in the HLT during the LHC Run 1 campaign. A comparison with an Intel E5-2690 core @ 3.0 GHz, the CPU type used by the HLT for the LHC Run 2 campaign, results in a speedup factor of 8.5. In total numbers, the 216 FCF processors provide the computing performance of 4255 AMD Opteron cores or 2203 Intel cores of the previously mentioned types. The performance of the reconstruction with respect to physics analysis is equivalent to or better than that of the official ALICE Offline clusterizer. Therefore, ALICE data taking was switched in 2011 to recording only the FCF clusters in compressed form, discarding the raw data from the TPC. Due to the capability to compress the clusters, the recorded data volume could be increased by a factor of 4 to 6.
For the LHC Run 3 campaign, starting in 2020, the FCF builds the foundation of the ALICE data-taking and processing strategy. The raw data volume (before processing) of the upgraded TPC will exceed 3 TB/s. As a consequence, online processing of the raw data and compression of the results before they enter the online computing farms is an essential part of the computing model.
Within the scope of this thesis, the H-RORC card and the FCF processor were developed and built from scratch. The thesis covers the conceptual design, the optimisation and implementation, as well as the verification, and is completed by performance benchmarks and experiences from real data taking.
The anan project is a tool for troubleshooting distributed high-performance computers. The novelty of the contribution is that well-known methods, already used successfully for debugging software and hardware, have been transferred to high-performance computing. In the course of this work, a tool named anan was implemented that assists in troubleshooting. In addition, it can be used as a more dynamic form of monitoring. Both use cases have been tested.
The tool consists of two parts:
1. a part named anan, which is operated interactively by the user,
2. and a part named anand, which automatically collects the requested measurements and executes commands if necessary.
The anan part runs sensors (small pattern-driven algorithms) whose results are merged by anan. To a first approximation, anan can be described as a monitoring system that (1) can be reconfigured quickly and (2) can measure more complex values that go beyond correlations of simple time series.
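As a toy illustration of what such a pattern-driven sensor might compute, the hypothetical sketch below estimates the lag at which one time series follows another; the actual sensors in anan are not specified here, so names and the scenario are assumptions.

```python
def pearson(xs, ys):
    """Pearson correlation of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def lag_sensor(xs, ys, max_lag=3):
    """Toy sensor: the lag at which series ys best tracks series xs."""
    return max(range(max_lag + 1),
               key=lambda k: pearson(xs[:len(xs) - k], ys[k:]))

# hypothetical metrics: latency follows load with a delay of two samples
load    = [1, 3, 2, 5, 4, 7, 6, 8]
latency = [0, 0, 1, 3, 2, 5, 4, 7]
print(lag_sensor(load, latency))  # → 2
```

A derived quantity like "the lag between load and latency" is exactly the kind of value that goes beyond plotting raw time series side by side.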
Antimicrobial resistance has become a serious threat to public health worldwide in this century. A better understanding of the mechanisms by which bacteria infect host cells, and of how the host counteracts the invading pathogens, is an important subject of current research. Intracellular bacteria of the Salmonella genus have frequently been used as a model system for bacterial infections. Salmonella are ingested with contaminated food or water and cause gastroenteritis and typhoid fever in animals and humans. Once inside the gastrointestinal tract, Salmonella can invade intestinal epithelial cells. The host cell can fight intracellular pathogens by a process called xenophagy. For complex systems, such as the processes involved in the bacterial infection of cells, computational systems biology provides approaches to describe mathematically how these intertwined mechanisms in the cell function. Computational systems biology allows the analysis of biological systems at different levels of abstraction; functional dependencies as well as dynamic behavior can be studied. In this thesis, we used the Petri net formalism to gain better insight into bacterial infections and host defense mechanisms and to predict cellular behavior that can be tested experimentally. We also focused on the development of new computational methods.
In this work, the first mathematical model of the xenophagic capturing of Salmonella enterica serovar Typhimurium in epithelial cells was developed. The model, expressed in the Petri net formalism, was constructed in an iterative process of modeling and analysis. For model verification, we analyzed the Petri net, including the computational execution of knockout experiments, termed in silico knockouts, which was established in this work. The in silico knockouts of the proposed Petri net are consistent with published experimental perturbation studies and thus support the biological credibility of the Petri net. In silico knockouts that have not yet been investigated experimentally provide hypotheses for future studies of the pathway.
To study the dynamic behavior of an epithelial cell infected with Salmonella enterica serovar Typhimurium, a stochastic Petri net was constructed. In experimental research, a question such as "Which incubation time is needed to infect half of the epithelial cells with Salmonella?" is often answered based on experience or practicability. A mathematical model can help to answer such questions and improve experimental design. The stochastic Petri net models the cell at different stages of the Salmonella infection. We parameterized the model with a set of experimental data derived from different literature sources. The kinetic parameters of the stochastic Petri net determine the time evolution of the bacterial infection of a cell. The model captures the stochastic variation and heterogeneity of the intracellular Salmonella population of a single cell over time. The stochastic Petri net is a valuable tool to examine the dynamics of Salmonella infections in epithelial cells and to generate useful information for experimental design.
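The dynamics of a stochastic Petri net are typically simulated with Gillespie's stochastic simulation algorithm. The sketch below illustrates the idea on a deliberately simplified model, a pure birth process with an assumed per-capita replication rate, not the parameterized model of the thesis.

```python
import math
import random

def simulate_replication(n0=1, rate=0.4, t_end=8.0, seed=1):
    """Gillespie simulation of a pure birth process: each intracellular
    bacterium divides at per-capita rate `rate` (per hour, assumed)."""
    rng = random.Random(seed)
    t, n = 0.0, n0
    while True:
        dt = rng.expovariate(rate * n)  # waiting time to the next division
        if t + dt > t_end:
            return n
        t += dt
        n += 1

# heterogeneity across 500 simulated cells after 8 h
samples = [simulate_replication(seed=s) for s in range(500)]
mean = sum(samples) / len(samples)
print(f"mean population: {mean:.1f} (birth-process theory: {math.exp(0.4 * 8):.1f})")
```

Even with identical parameters, individual cells end up with very different bacterial loads, which is exactly the single-cell heterogeneity the stochastic model is meant to capture.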
In the last part of this thesis, a novel theoretical method was introduced to perform knockout experiments in silico. The new concept of in silico knockouts is based on the computation of signal flows at steady state and allows the determination of knockout behavior that is comparable to experimental perturbation behavior. In this context, we established the concept of Manatee invariants and demonstrated the suitability of their application for in silico knockouts by reflecting biological dependencies from the signal initiation to the response. As a proof of principle, we applied the proposed concept of in silico knockouts to the Petri net of the xenophagic recognition of Salmonella. To enable the application of in silico knockouts for the scientific community, we implemented the novel method in the software isiKnock. isiKnock allows the automated execution and visualization of in silico knockouts in signaling pathways expressed in the Petri net formalism. In conclusion, the knockout analysis provides a valuable method to verify computational models of signaling pathways, to detect inconsistencies in the current knowledge of a pathway, and to predict unknown pathway behavior.
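The idea of an in silico knockout can be illustrated with a minimal reachability check: remove one transition and test whether the response is still producible. The toy network below is hypothetical and far simpler than full Petri net semantics with invariants, but it conveys the principle.

```python
def reachable(transitions, start, knockout=None):
    """Forward-propagate markings: a transition fires once all of its
    input places are marked; returns the set of markable places."""
    marked = set(start)
    changed = True
    while changed:
        changed = False
        for name, (ins, outs) in transitions.items():
            if name == knockout:
                continue  # the knocked-out transition never fires
            if set(ins) <= marked and not set(outs) <= marked:
                marked |= set(outs)
                changed = True
    return marked

# hypothetical toy pathway: signal -> receptor -> (kinase A | kinase B) -> response
net = {
    "bind":   (["signal"], ["receptor_active"]),
    "act_A":  (["receptor_active"], ["kinaseA"]),
    "act_B":  (["receptor_active"], ["kinaseB"]),
    "resp_A": (["kinaseA"], ["response"]),
    "resp_B": (["kinaseB"], ["response"]),
}
for ko in [None, "act_A", "bind"]:
    ok = "response" in reachable(net, ["signal"], knockout=ko)
    print(f"knockout {ko}: response {'reached' if ok else 'lost'}")
```

Knocking out one redundant branch ("act_A") leaves the response intact, while knocking out the upstream binding step abolishes it, mirroring how in silico knockouts are compared against experimental perturbation phenotypes.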
In summary, the main contributions of this thesis are the Petri net of the xenophagic capturing of Salmonella enterica serovar Typhimurium in epithelial cells to study the knockout behavior and the stochastic Petri net of an epithelial cell infected with Salmonella enterica serovar Typhimurium to analyze the infection dynamics. Moreover, we established a new method for in silico knockouts, including the concept of Manatee invariants and the software isiKnock. The results of these studies contribute to a better understanding of bacterial infections and provide valuable model analysis techniques for the field of computational systems biology.
In the first part of this thesis, we introduce the concept of prospective strict no-arbitrage for discrete-time financial market models with proportional transaction costs. The prospective strict no-arbitrage condition, which is a variant of strict no-arbitrage, is slightly weaker than the robust no-arbitrage condition. It still implies that the set of portfolios attainable from zero initial endowment is closed in probability. Consequently, prospective strict no-arbitrage implies the existence of consistent prices, which may lie on the boundary of the bid-ask spread. A weak version of prospective strict no-arbitrage turns out to be equivalent to the existence of a consistent price system.
In continuous-time financial market models with proportional transaction costs, efficient friction, i.e., nonvanishing transaction costs, is a standing assumption. Together with robust no free lunch with vanishing risk, it rules out strategies of infinite variation, which usually appear in frictionless financial markets. In the second part of this thesis, we show how models with and without transaction costs can be unified. The bid and the ask price of a risky asset are given by càdlàg processes which are locally bounded from below and may coincide at some points. In a first step, we show that if the bid-ask model satisfies no unbounded profit with bounded risk for simple long-only strategies, then there exists a semimartingale lying between the bid and the ask price process.
In a second step, under the additional assumption that the zeros of the bid-ask spread are either starting points of an excursion away from zero or inner points from the right, we show that for every bounded predictable strategy specifying the amount of risky assets, the semimartingale can be used to construct the corresponding self-financing risk-free position in a consistent way. Finally, the set of most general strategies is introduced, which also provides a new view on the frictionless case.
Machine Learning (ML) is so pervasive in our everyday lives that we often do not even realise we are using systems based on it, and it is evolving faster than ever before. When deploying ML systems that make decisions on their own, we need to think about their ignorance of our uncertain world. The uncertainty might arise from scarcity of data, bias in the data, or even a mismatch between the real world and the ML model. Given all these uncertainties, we need to think about how to build systems that are not totally ignorant of them. Bayesian ML can, to some extent, deal with these problems. Specifying the model using probabilities provides a convenient way to quantify uncertainties, which can then be included in the decision-making process.
In this thesis, we introduce the Bayesian ansatz to modeling and apply Bayesian ML models in finance and economics. In particular, we dig deeper into Gaussian processes (GP) and the Gaussian process latent variable model (GPLVM). Applied to the returns of several assets, the GPLVM provides the covariance structure and also a latent-space embedding of the assets. Several financial applications can be built on the output of the GPLVM. To demonstrate this, we build an automated asset allocation system and a predictor for missing asset prices, and we identify further structure in financial data.
It turns out that the GPLVM exhibits a rotational symmetry in the latent space, which makes it harder to fit. Our second publication reports how to deal with that symmetry. We propose another parameterization of the model using Householder transformations, by which the symmetry is broken. A reparameterization changes a Bayesian model unless the prior is adjusted accordingly; we provide the correct prior distribution of the new parameters, such that the model, i.e. the data density, is unchanged under the reparameterization. After applying the reparameterization to Bayesian PCA, we show that the symmetry of nonlinear models can be broken in the same way.
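For reference, a Householder transformation H = I - 2vv^T/(v^T v) is an orthogonal, involutory reflection, and chaining such reflections is one standard way to parameterize orthogonal matrices. The sketch below only verifies these basic properties; it is not the reparameterization used in the publication.

```python
def householder(v):
    """H = I - 2 v v^T / (v^T v): reflection across the hyperplane
    orthogonal to v; H is orthogonal and symmetric."""
    n = len(v)
    norm2 = sum(x * x for x in v)
    return [[(1.0 if i == j else 0.0) - 2.0 * v[i] * v[j] / norm2
             for j in range(n)] for i in range(n)]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

H = householder([1.0, 2.0, 2.0])
HH = matmul(H, H)  # a reflection is its own inverse, so H H = I
print(all(abs(HH[i][j] - (i == j)) < 1e-12 for i in range(3) for j in range(3)))
```

Because every Householder matrix is orthogonal, a product of them stays orthogonal by construction, which is what makes them convenient building blocks for symmetry-breaking parameterizations.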
In our last project, we propose a new method for matching quantile observations that uses order statistics. Using order statistics as the likelihood, instead of a Gaussian likelihood, has several advantages. We compare these two models and highlight their advantages and disadvantages. To demonstrate our method, we fit quantile observations of salary data from several European countries. Given several candidate models for the fit, our method also provides a metric to choose the best option.
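The density of the k-th order statistic underlies such a likelihood: for n i.i.d. draws with cdf F and pdf f, the k-th smallest has density n!/((k-1)!(n-k)!) F^(k-1) (1-F)^(n-k) f. A minimal sketch, assuming a normal population (the model choices of the publication are not reproduced here):

```python
import math

def norm_pdf(x, mu=0.0, sd=1.0):
    z = (x - mu) / sd
    return math.exp(-0.5 * z * z) / (sd * math.sqrt(2 * math.pi))

def norm_cdf(x, mu=0.0, sd=1.0):
    return 0.5 * (1 + math.erf((x - mu) / (sd * math.sqrt(2))))

def log_order_stat_density(x, k, n, mu=0.0, sd=1.0):
    """Log-density of the k-th smallest of n iid N(mu, sd^2) draws."""
    F, f = norm_cdf(x, mu, sd), norm_pdf(x, mu, sd)
    log_binom = math.lgamma(n + 1) - math.lgamma(k) - math.lgamma(n - k + 1)
    return log_binom + (k - 1) * math.log(F) + (n - k) * math.log(1 - F) + math.log(f)

# log-likelihood of observing the sample median (k=50 of n=99) at x = 0.1
print(log_order_stat_density(0.1, k=50, n=99))
```

Summing such terms over the observed quantiles of a data set yields an order-statistics likelihood that can be maximized, or compared across candidate population models, in place of a Gaussian error model.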
We hope that this thesis illustrates some benefits of Bayesian modeling (especially Gaussian processes) in finance and economics and its usage when uncertainties are to be quantified.
Digital pathology is a new but steadily growing field in medicine. The continuous development of improved digital scanners nowadays allows complete tissue sections to be scanned, and whole slide images are gaining importance. The aim of this work is the development of methods for the analysis of whole slide images of classical Hodgkin lymphoma. Hodgkin lymphoma, or Hodgkin's disease, is a tumour of the lymphatic system in which the monoclonal tumour cells usually derive from B lymphocytes at a precursor stage.
Slightly more than 9,000 Hodgkin lymphoma cases are diagnosed in the USA every year. Although the 5-year survival rate for Hodgkin lymphoma is comparatively high at 85.3%, about 1,100 deaths per year are still registered in the USA. At the microscopic level, Hodgkin-Reed-Sternberg cells (HRS cells) are typical of classical Hodgkin lymphoma. HRS cells have one or more strongly enlarged nuclei with a coarse chromatin structure. Immunohistologically, characteristic markers exist for HRS cells; for instance, HRS cells are positive for the activation marker CD30.
In addition to conventional microscopy, scanners make it possible to digitise entire slides (whole slide images). So far, whole slide images are rarely used in routine diagnostics. A major advantage of digitised tissue sections lies in computer-aided analysis. Automated image analysis methods such as cell detection can support pathologists in diagnosis by providing comprehensive statistics on the number and distribution of immunostained cells.
The immunohistological images examined were provided by the Dr. Senckenberg Institute of Pathology at the University Hospital Frankfurt. The tissue sections considered are immunostained against CD30, a membrane receptor that is expressed in HRS cells and activated lymphocytes. The tissue sections were digitised with an Aperio ScanScope slide scanner and are available at a high resolution of 0.25 μm per pixel. At the given tissue section sizes, this results in images of up to 90,000 × 90,000 pixels.
The examined image data set comprises 35 images of lymph node tissue sections from three diagnoses: mixed-cellularity classical Hodgkin lymphoma, nodular classical Hodgkin lymphoma and lymphadenitis. The image processing pipeline was partly implemented from scratch and partly based on established image recognition software and libraries such as CellProfiler and Java Advanced Imaging. CD30-positive cell objects are detected automatically in the tissue sections, and, in addition to the global position in the whole slide image, further morphology descriptors are computed, such as area, Feret diameter, eccentricity and solidity. The cell detection achieves a high precision of 84% and a very good sensitivity of 95%.
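Morphology descriptors like those listed above are straightforward to compute from a detected cell outline. The sketch below is a generic illustration with a toy polygon in pixel coordinates, not the pipeline used in the work: the maximum Feret diameter as the largest pairwise distance between boundary points, and the area via the shoelace formula.

```python
import math
from itertools import combinations

def max_feret_diameter(points):
    """Maximum Feret diameter: the largest distance between any two
    boundary points of the object (brute force)."""
    return max(math.dist(p, q) for p, q in combinations(points, 2))

def polygon_area(points):
    """Shoelace formula for the area of a simple polygon whose
    boundary points are given in order."""
    n = len(points)
    s = sum(points[i][0] * points[(i + 1) % n][1]
            - points[(i + 1) % n][0] * points[i][1] for i in range(n))
    return abs(s) / 2.0

# toy cell outline in pixels (at 0.25 um/px, multiply lengths by 0.25 for um)
cell = [(0, 0), (10, 0), (12, 5), (10, 10), (0, 10)]
print(round(max_feret_diameter(cell), 3), polygon_area(cell))  # → 14.142 110.0
```

On real segmentation masks the same quantities are computed over the extracted boundary pixels, and the pixel results are converted to micrometres using the scan resolution.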
It was shown that, on average, considerably fewer CD30-positive cells are present in lymphadenitis cases than in classical Hodgkin lymphoma. While only about 3,000 cells were found here on average, the average for mixed-cellularity classical Hodgkin lymphoma was about 19,000 CD30-positive cells. Whereas the CD30-positive cells are distributed relatively evenly in lymphadenitis cases, in classical Hodgkin lymphoma cases they form cell clusters of higher density.
The computed morphology descriptors offer the possibility to characterise the tissue sections and the course of the disease in more detail. Moreover, the size and appearance of HRS cells have so far mainly been determined on the basis of manually selected cells. One measure of the extent of a cell is the maximum Feret diameter. For CD30 cells in classical Hodgkin lymphoma it averages 20 μm, and is thus considerably larger than the average of 15 μm measured in lymphadenitis.
A graph-theoretical approach was chosen to model the CD30-positive cells and their spatial neighbourhood. In CD30 cell graphs of classical Hodgkin lymphoma tissue sections, the average node degree is strongly increased compared with that of lymphadenitis images. A comparison with random graphs shows that the observed node degree distributions do not indicate a random distribution of the cells in the tissue section. Properties and distribution of communities in CD30 cell graphs can additionally be used to characterise classical Hodgkin lymphoma tissue sections in more detail.
This work shows that the evaluation of whole slide images can support and improve diagnosis. The more than 400,000 automatically detected CD30-positive cell objects were described morphologically, and together with their positions in the tissue section, important properties of classical Hodgkin lymphoma can be examined. The cell graphs can be extended with further cell types and applied to other diseases.
Blockchains in public administration : a RADIUS on blockchain framework for public administration
(2023)
The emergence of blockchain technology has generated a great deal of attention, as reflected in numerous scientific and journalistic articles. However, the implementation of blockchain for public administrations in Germany has encountered a setback owing to unsuccessful initiatives; initial enthusiasm was followed by disillusionment. Nevertheless, the technology continues to evolve. This paper examines whether the use of a blockchain can still optimize the processes of public administrations. Not only the failed projects are analysed, but also more recent applications of the technology and their potential relevance for administration, especially in the state of Hesse.
To answer whether blockchains are promising for administrations, a Design Science Research (DSR) approach is chosen. DSR is a research approach that aims to create new and innovative solutions to real-world problems through the development and evaluation of artefacts such as models, methods, or prototypes. For this work, the implementation of a framework realizing an Authentication, Authorization, and Accounting (AAA) system on the blockchain was identified as worthwhile. The framework aims to implement the aforementioned AAA tasks using a blockchain. The Remote Authentication Dial-In User Service (RADIUS) protocol was identified as a suitable protocol for the AAA system. The goal is to create a way to implement the system either entirely on a blockchain or as a hybrid system, considering various blockchain technologies. The framework developed for this purpose is named AAA-me.
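For context, RADIUS itself defines how replies are authenticated against the shared secret (RFC 2865): the Response Authenticator is the MD5 hash over the packet header, the request authenticator, the attributes, and the secret. A minimal sketch of that computation follows; the AAA-me framework itself is not reproduced here.

```python
import hashlib
import struct

def response_authenticator(code, ident, length, request_auth, attrs, secret):
    """RFC 2865 Response Authenticator:
    MD5(Code | ID | Length | RequestAuth | Attributes | Secret)."""
    header = struct.pack("!BBH", code, ident, length)  # network byte order
    return hashlib.md5(header + request_auth + attrs + secret).digest()

# e.g. computing the authenticator of an Access-Accept (code 2);
# in practice the 16-byte request authenticator is random per request
request_auth = bytes(16)
attrs = b""
length = 20 + len(attrs)  # 20-byte header plus attributes
auth = response_authenticator(2, 7, length, request_auth, attrs, secret=b"shared-secret")
print(auth.hex())
```

Whether this check runs against a conventional server or against state stored on a chain, the shared-secret computation stays the same, which is what makes hybrid integrations of the kind discussed here conceivable.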
The development of AAA-me has shown that the desired framework for implementing RADIUS on the blockchain is feasible at various degrees of implementation, whereas previous work mostly relied on a full implementation. Additionally, it has been shown that AAA-me can be used for hybrid integration at different implementation levels, which sets it apart from the few previous hybrid approaches. Furthermore, AAA-me was investigated in different laboratory environments to determine its expected resilience against a Single Point of Failure (SPOF). The results of the lab investigation indicated that a RADIUS system on top of a blockchain can provide benefits in terms of security and performance. In the lab environment, the times needed to process a series of authorization requests were measured. In addition, it was illustrated how a RADIUS system implemented using blockchain can protect itself against Man-in-the-Middle (MITM) attacks.
Finally, in collaboration with the Hessian Central Office for Data Processing (German: Hessische Zentrale für Datenverarbeitung) (HZD), another test lab demonstrated how a RADIUS system on the blockchain can integrate with the existing IT systems of the German state of Hesse. Based on these findings, this work reevaluated the applicability of blockchain technology for public administration processes.
The work has thus shown that the use of a blockchain can still be worthwhile. However, it has also been shown that an implementation can bring many problems with it. The small number of blockchain developers and engineers poses the risk of not finding people to develop and maintain a system. In addition, one faces the problem of having to determine an architecture now that will be applied to many projects in the future, while each project can, in turn, have an impact on the choice of architecture. Once this problem is solved and a blockchain infrastructure is available, it can be established quickly and be more resistant to a SPOF, for example for Public Key Infrastructure (PKI) systems.
AAA-me was only applied in lab and test environments, so no real data ran over its own infrastructure. This allowed the necessary flexibility for development; however, system properties could appear in real situations that are not detectable this way. Furthermore, the development of AAA-me is still in its infancy: many manual adjustments need to be made for it to integrate with an existing RADIUS system. Also, no dedicated system hardening has been carried out in the lab environments, so vulnerabilities can quickly open up on web servers due to misconfigurations and missing updates. For these reasons, productive use should be discouraged until major further development is carried out.
In the first part of the thesis, we show that the payment flow of a linear tax on trading gains from a security with a semimartingale price process can be constructed for all càglàd, adapted trading strategies. It is characterized as the unique continuous extension of the tax payments for elementary strategies with respect to convergence uniformly in probability. In this framework, we prove that, under quite mild assumptions, dividend payoffs almost surely have a negative effect on the investor's after-tax wealth if the riskless interest rate is always positive. In addition, we give an example of tax-efficient strategies for which the tax payment flow can be computed explicitly.
In the second part of the thesis, we investigate the impact of capital gains taxes on optimal investment decisions in a quite simple model. Namely, we consider a risk-neutral investor who owns one risky stock that she assumes to have a lower expected return than the riskless bank account, and we determine the optimal stopping time at which she sells the stock to invest the proceeds in the bank account until the maturity date. In the case of linear taxes and a positive riskless interest rate, the problem is nontrivial because at the selling time the investor has to realize book profits, which triggers tax payments. We derive a boundary, continuous and increasing in time and decreasing in the volatility of the stock, such that the investor sells the stock at the first time its price is smaller than or equal to this boundary.