Analysis of machine learning prediction quality for automated subgroups within the MIMIC III dataset
(2023)
The motivation for this master's thesis is to explore the potential of predictive data analytics in the field of medicine. For this, the MIMIC-III dataset offers an extensive foundation for the construction of prediction models, including Random Forest, XGBoost, and deep learning networks. These models were implemented to forecast the mortality of 2,655 stroke patients.
The first part of the thesis involved conducting a comprehensive data analysis of the filtered MIMIC-III dataset.
Subsequently, the effectiveness and fairness of the predictive models were evaluated. Although the performance levels of the developed models did not match those reported in related research, their potential became evident: the results demonstrated promising capabilities and highlighted the effectiveness of the applied methodologies. Moreover, the feature relevance within the XGBoost model was examined to increase model explainability.
Finally, relevant subgroups were identified to perform a comparative analysis of the prediction performance across these subgroups. While this approach can be regarded as a valuable methodology, it was not possible to investigate the underlying reasons for potential unfairness across clusters: too few instances per subgroup remained in the test data for further fairness or feature-relevance analysis.
In conclusion, the implementation of an alternative use case with a higher patient count is recommended.
The code for this analysis is made available via a GitHub repository and includes a frontend to visualize the results.
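As an illustration of the pipeline described above, a minimal sketch of training a gradient-boosted classifier and reading off feature importances; the feature columns, labels, and hyperparameters are placeholders, not the thesis implementation (which is available in the linked GitHub repository).

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from xgboost import XGBClassifier

# Illustrative stand-in for the filtered stroke cohort; the real features
# would come from the MIMIC-III tables.
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "age": rng.normal(70, 10, 2655),
    "heart_rate": rng.normal(80, 15, 2655),
    "glucose": rng.normal(140, 40, 2655),
})
y = rng.integers(0, 2, 2655)  # mortality label (placeholder)

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
model = XGBClassifier(n_estimators=200, max_depth=4)
model.fit(X_train, y_train)

print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
# Feature relevance, as used to increase model explainability.
for name, imp in zip(X.columns, model.feature_importances_):
    print(f"{name}: {imp:.3f}")
```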
Goal-Conditioned Reinforcement Learning (GCRL) is a popular framework for training agents to solve multiple tasks in a single environment. It is crucial to train an agent on a diverse set of goals to ensure that it can learn to generalize to unseen downstream goals. Therefore, current algorithms try to learn to reach goals while simultaneously exploring the environment for new ones (Aubret et al., 2021; Mendonca et al., 2021). This creates a form of the prominent exploration-exploitation dilemma. To relieve the pressure of a single agent having to optimize for two competing objectives at once, this thesis proposes the novel algorithm family Goal-Conditioned Reinforcement Learning with Prior Intrinsic Exploration (GC-π), which separates exploration and goal learning into distinct phases. In the first exploration phase, an intrinsically motivated agent explores the environment and collects a rich dataset of states and actions. This dataset is then used to learn a representation space, which acts as the distance metric for the goal-conditioned reward signal. In the final phase, a goal-conditioned policy is trained with the help of the representation space, and its training goals are randomly sampled from the dataset collected during the exploration phase. Multiple variations of these three phases have been extensively evaluated in the classic AntMaze MuJoCo environment (Nachum et al., 2018). The final results show that the proposed algorithms are able to fully explore the environment and solve all downstream goals while using every dimension of the state space for the goal space. This makes the approach more flexible compared to previous GCRL work, which only ever uses a small subset of the dimensions for the goals (S. Li et al., 2021a; Pong et al., 2020).
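A toy sketch of the three-phase structure described above; the random-walk "exploration", the whitening "representation", and the distance-based reward are deliberately trivial stand-ins for the learned components of GC-π.

```python
import numpy as np

rng = np.random.default_rng(0)

# Phase 1: exploration -- a random walk stands in for the intrinsically
# motivated agent and collects a dataset of visited states.
states = np.cumsum(rng.normal(size=(5000, 2)), axis=0)

# Phase 2: representation learning -- simple whitening of the dataset stands
# in for the learned representation space phi.
mean, std = states.mean(axis=0), states.std(axis=0)
phi = lambda s: (s - mean) / std

# Phase 3: goal-conditioned training -- goals are sampled from the exploration
# dataset, and the reward is the negative distance in representation space.
def reward(state, goal):
    return -np.linalg.norm(phi(state) - phi(goal))

goal = states[rng.integers(len(states))]
print(reward(states[0], goal))
```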
WaterGAP (Water - Global Assessment and Prognosis) is a tool for modeling global water use and water availability. It participates, among other models, in the ISIMIP initiative (The Inter-Sectoral Impact Model Intercomparison Project). As part of this initiative, water temperature should be calculated by the participating hydrological models because it plays a vital role in many chemical, physical and biological processes. The subject of this master's thesis is therefore to implement the physically based surface water temperature computation after van Beek et al. (2012) and Wanders et al. (2019) in WaterGAP and to compare the results to the statistical regression approach by Punzet et al. (2012). The computation is validated with observed water temperature data obtained from the GEMStat water quality database. The results are good for arctic and temperate latitudes. Surface water temperatures for tropical rivers are overestimated, most likely due to the overestimation of precipitation temperatures, incoming radiation and groundwater temperatures. The comparison with the regression model by Punzet et al. (2012) shows matching results. The regression model even matches the WaterGAP results for most of the simulations of the future under climate change conditions, where the regression model should stop working due to changing environmental parameters. Several assumptions had to be made in order to implement the water temperature calculation in WaterGAP. These include, e.g., discharge temperatures for power plant cooling water as well as precipitation and surface runoff temperatures. For model improvement, three different values for different regions of the world could be used to cool down the precipitation and surface runoff. The model could also be improved by refining the ice formation calculation, especially for the conditions when the ice melts, breaks up and is transported downstream. Furthermore, feedback to the river channel roughness could be implemented once ice has formed. The WaterGAP model upgraded with the water temperature calculation will support the ISIMIP initiative in the future.
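For illustration, a strongly simplified single time step of the kind of energy-balance scheme such physically based temperature computations build on; all constants and fluxes here are illustrative assumptions, not the WaterGAP implementation.

```python
RHO_W = 1000.0  # water density [kg/m^3]
CP_W = 4180.0   # specific heat of water [J/(kg K)]

def temperature_step(T, depth, sw_net, lw_net, sensible, latent, dt=86400.0):
    """Advance surface water temperature by one day from the net heat flux.
    Fluxes in W/m^2 (positive = into the water column), depth in m."""
    net_flux = sw_net + lw_net + sensible + latent
    return T + net_flux * dt / (RHO_W * CP_W * depth)

# Illustrative values: a 2 m deep river gaining 50 W/m^2 net warms ~0.5 K/day.
print(temperature_step(T=15.0, depth=2.0, sw_net=150.0, lw_net=-60.0,
                       sensible=-20.0, latent=-20.0))
```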
The reanalysis product ERA5 (Copernicus Climate Change Service, 2018) and the derived product W5E5 (the WATCH Forcing Data (WFD) methodology applied to ERA5) (Lange et al., 2021) have recently been published, initiating a new phase of scientific research utilizing these datasets. ERA5 and W5E5 offer the possibility to reduce uncertainties in model results through their improved quality compared to previous climate reanalyses (Cucchi et al., 2020). The suitability of either climate forcing as input for the hydrological model WaterGAP, and the influence of the model's specific calibration routine, was evaluated with four model experiments. The model was validated by analysing its ability to produce reasonable values for global water balance components and to reproduce observed discharge in 1427 basins, as well as total water storage anomalies in 143 basins, using well-established efficiency metrics. Bias correction of W5E5 was found to lead to a more realistic global mean precipitation and, consequently, discharge and AET values. In an uncalibrated model setup, ERA5 results in better performance across all efficiency metrics. Model results produced with W5E5 as climate input were strongly improved through calibration, ultimately leading to the best performance of all four model experiments. However, model performance improved considerably through calibration with both climate forcings; hence, calibration was found to have the strongest effect on model performance. Furthermore, spatial differences in the performance of either forcing were identified: snow-dominated regions show an overall better performance with ERA5, while wetter and warmer regions are better represented with W5E5. Finally, it can be concluded that W5E5 should be preferred as climate input for impact modelling; however, depending on the spatial scale and region, ERA5 should at least be considered, in particular for snow-dominated regions.
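The "well-established efficiency metrics" are not named in this summary; Nash-Sutcliffe efficiency (NSE) and Kling-Gupta efficiency (KGE) are typical choices for such discharge evaluations, sketched here for reference.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect, 0 matches the mean of obs."""
    obs, sim = np.asarray(obs), np.asarray(sim)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def kge(obs, sim):
    """Kling-Gupta efficiency (2009 formulation): 1 is perfect."""
    obs, sim = np.asarray(obs), np.asarray(sim)
    r = np.corrcoef(obs, sim)[0, 1]   # correlation
    alpha = sim.std() / obs.std()     # variability ratio
    beta = sim.mean() / obs.mean()    # bias ratio
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

obs = np.array([3.1, 4.2, 5.0, 2.8, 3.9])  # observed discharge (placeholder)
sim = np.array([3.0, 4.5, 4.8, 3.1, 3.7])  # simulated discharge (placeholder)
print(nse(obs, sim), kge(obs, sim))
```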
We give theorems about asymptotic normality of general additive functionals on patricia tries, derived from results on tries. These theorems are applied to show asymptotic normality of the distribution of random fringe trees in patricia tries. Formulas for the asymptotic mean and variance are given. The proportion of fringe trees with k keys is asymptotically, ignoring oscillations, given by (1 − ρ(k)) / ((H + J) k(k − 1)), with the source entropy H, an entropy-like constant J that equals H in the binary case, and an exponentially decreasing function ρ(k). Another application gives asymptotic normality of the independence number and the number of k-protected nodes.
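In display form, the stated proportion of fringe trees with k keys reads (under the natural reading of the inline formula):

```latex
\[
  \frac{1 - \rho(k)}{(H + J)\,k(k - 1)} .
\]
```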
AI-based computer vision systems play a crucial role in environment perception for autonomous driving. Although the development of self-driving systems has been pursued for multiple decades, it is only recently that breakthroughs in Deep Neural Networks (DNNs) have led to their widespread application in increasingly sophisticated perception pipelines. However, with this rising trend comes the need for a systematic safety analysis to evaluate the DNN's behavior in difficult scenarios and to identify the various factors that cause misbehavior in such systems. This work aims to deliver a crucial contribution to the sparse literature on the systematic analysis of Performance Limiting Factors (PLFs) for DNNs by investigating the task of pedestrian detection in urban traffic from a monocular camera mounted on an autonomous vehicle. To investigate the common factors that lead to DNN misbehavior, six commonly used state-of-the-art object detection architectures and three detection tasks are studied using a new large-scale synthetic dataset and a smaller real-world dataset for pedestrian detection. The systematic analysis includes 17 factors from the literature and four novel factors that are introduced as part of this work. Each of the 21 factors is assessed based on its influence on the detection performance and whether it can be considered a PLF. To support the evaluation of the detection performance, a novel and task-oriented Pedestrian Detection Safety Metric (PDSM) is introduced, which is specifically designed to aid in the identification of individual factors that contribute to DNN failure. This work further introduces a training approach for F1-score maximization whose purpose is to ensure that the DNNs are assessed at their highest performance. Moreover, a new occlusion estimation model is introduced to replace the missing pedestrian occlusion annotations in the real-world dataset. Based on a qualitative analysis of the correlation graphs that visualize the correlation between the PLFs and the detection performance, this study identified 16 of the initial 21 factors as PLFs for DNNs, of which the entropy, the occlusion ratio, the boundary edge strength, and the bounding box aspect ratio turned out to affect the detection performance most severely. The findings of this study highlight some of the most serious shortcomings of current DNNs and pave the way for future research to address these issues.
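The F1-score maximization approach itself is not detailed in this summary; as a hedged illustration of the evaluation side, the following sketch selects the confidence threshold that maximizes F1 on held-out data (labels and scores are synthetic placeholders).

```python
import numpy as np
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 1000)                         # pedestrian present / absent
scores = np.clip(y_true * 0.3 + rng.random(1000), 0, 1)   # detector confidences

# Sweep thresholds and keep the one with the highest F1-score.
thresholds = np.linspace(0.05, 0.95, 19)
f1s = [f1_score(y_true, scores >= t) for t in thresholds]
best = thresholds[int(np.argmax(f1s))]
print(f"best threshold {best:.2f}, F1 {max(f1s):.3f}")
```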
Statistical shape models learn to capture the most characteristic geometric variations of anatomical structures given samples from their population. Accordingly, shape models have become an essential tool for many medical applications and are used in, for example, shape generation, reconstruction, and classification tasks. However, established statistical shape models require precomputed dense correspondence between shapes, often lack robustness, and ignore the global surface topology. This thesis presents a novel neural flow-based shape model that does not require any precomputed correspondence. The proposed model relies on continuous flows of a neural ordinary differential equation to model shapes as deformations of a template. To increase the expressivity of the neural flow and disentangle global, low-frequency deformations from the generation of local, high-frequency details, we propose to apply a hierarchy of flows. We evaluate the performance of our model on two anatomical structures, the liver and the distal femur. Our model outperforms state-of-the-art methods in providing an expressive and robust shape prior, as indicated by its generalization ability and specificity. Moreover, we demonstrate the effectiveness of our shape model on shape reconstruction tasks and find anatomically plausible solutions. Finally, we assess the quality of the emerging shape representation in an unsupervised setting and discriminate healthy from pathological shapes.
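A minimal PyTorch sketch of the core mechanism, deforming template points by integrating a learned velocity field with explicit Euler steps; the architecture, step count, and data are illustrative assumptions, not the hierarchical model of the thesis.

```python
import torch
import torch.nn as nn

class VelocityField(nn.Module):
    """Velocity v(x, t) of the flow; input is point coordinates plus time."""
    def __init__(self, dim=3, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, dim),
        )

    def forward(self, x, t):
        t_col = torch.full_like(x[:, :1], t)
        return self.net(torch.cat([x, t_col], dim=1))

def deform(points, field, steps=20):
    """Integrate dx/dt = v(x, t) from t=0 to t=1 with explicit Euler steps."""
    x, dt = points, 1.0 / steps
    for i in range(steps):
        x = x + dt * field(x, i * dt)
    return x

template = torch.randn(1000, 3)  # template surface samples (placeholder)
shape = deform(template, VelocityField())
print(shape.shape)  # torch.Size([1000, 3])
```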
Electron identification with a likelihood method and measurements of di-electrons for the CBM-TRD
(2017)
In this work, a likelihood method has been implemented and investigated as a particle identification algorithm for the CBM-TRD.
The creation of the probability distributions for the likelihood method via V0 topologies seems to be feasible, and the purity of the obtained samples is sufficient for use in the likelihood method.
The comparison between the ANN and the likelihood method shows no differences in the identification performance. The pion suppression factor reaches the same values for the same electron identification efficiencies, and the yields of the resulting di-lepton signals are comparable. The signal-to-background ratios for both methods have the same values and show a value of about 10⁻² in the invariant mass range of m_inv = 1.5–2.5 GeV/c², which is expected to be sufficient to provide access to the thermal in-medium and QGP radiation.
The investigation of a detector system without a TRD shows no pion suppression for momenta above p = 6 GeV/c. Therefore, the background contributions increase drastically and the signal-to-background ratio decreases at all invariant masses, but especially in the invariant mass range of m_inv = 1.5–2.5 GeV/c².
The background contributions in the invariant mass range of m_inv = 1.5–2.5 GeV/c² are also influenced by the selected electron identification efficiency of the TRD, which significantly shifts the fraction of the eπ contributions relative to the total number of pairs.
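Schematically, the likelihood method combines per-layer energy deposits into a likelihood ratio; the Gaussian response distributions below are placeholders for the distributions obtained from V0 topologies.

```python
import numpy as np
from scipy.stats import norm

# Placeholder response distributions per TRD layer; the real ones are
# extracted from V0-topology samples.
pdf_e = norm(loc=6.0, scale=2.0)    # electrons: TR photons add signal
pdf_pi = norm(loc=2.5, scale=1.0)   # pions: ionization only

def electron_likelihood(deposits):
    """Combined likelihood ratio L_e / (L_e + L_pi) over all layers."""
    L_e = np.prod(pdf_e.pdf(deposits))
    L_pi = np.prod(pdf_pi.pdf(deposits))
    return L_e / (L_e + L_pi)

track = np.array([5.8, 7.1, 4.9, 6.3])  # energy deposits in four layers (a.u.)
print(electron_likelihood(track))        # close to 1 -> electron-like
```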
Anisotropic collective flow of protons resulting from non-central heavy-ion collisions is a unique hadronic observable providing information about the early stage of the nuclear collision. The analysis of collective flow in the energy regime between 1 and 2 AGeV enables the study of the phase diagram of hadronic matter at a high baryochemical potential µ_B, as well as the analysis of the equation of state at densities up to three times the ground-state density ρ₀.
The algorithms of the standard event plane method and the scalar product method are used to analyse the directed and elliptic flow of protons for the 0–40% most central events.
Prior to the analysis of experimental data, the respective influence of the reconstruction procedure on the algorithms is examined using Monte Carlo simulations based on the Ultra-relativistic Quantum Molecular Dynamics (UrQMD) model.
Subsequently, experimental data measured in April 2012 with the High Acceptance DiElectron Spectrometer (HADES) are analysed using both methods. About 7.3 × 10⁹ Au+Au events at a kinetic beam energy of 1.23 AGeV, equivalent to a centre-of-mass energy of √s_NN = 2.42 GeV, were recorded. A multi-differential analysis is feasible as the HADES detector provides good transverse momentum and rapidity coverage.
Both algorithms result in identical values for directed and elliptic flow across all centrality classes within the observable phase space of protons. The calculated integrated value of v2 at midrapidity is in good agreement with world data.
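A compact sketch of the event-plane side of the analysis: build the second-harmonic Q-vector, estimate the event plane, and correlate track angles with it. Resolution correction and autocorrelation removal are omitted, and the single toy "event" below is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy sample: azimuthal angles with an approximate v2 = 0.05 modulation,
# i.e. dN/dphi proportional to 1 + 2*v2*cos(2*phi).
phi = rng.uniform(0, 2 * np.pi, 200_000)
phi = phi - 0.05 * np.sin(2 * phi)

# Second-harmonic Q-vector and event plane angle Psi_2.
Qx, Qy = np.cos(2 * phi).sum(), np.sin(2 * phi).sum()
psi2 = 0.5 * np.arctan2(Qy, Qx)

# Observed v2 with respect to the event plane (no resolution correction).
v2_obs = np.mean(np.cos(2 * (phi - psi2)))
print(v2_obs)  # close to the injected 0.05
```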
In April and May 2012, data on Au+Au collisions at a beam energy of E_kin = 1.23A GeV were collected with the High Acceptance Di-Electron Spectrometer (HADES) at the GSI Helmholtzzentrum für Schwerionenforschung facility in Darmstadt, Germany. In this thesis, the production of deuterons in this collision system is investigated.
A total number of 2.1 × 10⁹ Au+Au events is selected, containing the most central 0–40% of events. After particle identification, based on a mass determination via time-of-flight and momentum and on a measurement of the energy loss, the transverse mass spectra of the deuteron candidates are extracted for various rapidities and subsequently corrected for acceptance and efficiency.
The inverse slope parameter of a Boltzmann fit applied to the transverse mass spectra at midrapidity, which is referred to as the effective temperature, is extracted. For a static thermal source, this parameter corresponds to the kinetic freeze-out temperature T_kin and is therefore expected to be smaller than or equal to the chemical freeze-out temperature T_chem. The extracted effective temperature of T_eff = (190 ± 10) MeV, however, exceeds the chemical freeze-out temperature that was obtained by a statistical model fit to different particle yields. The effective temperatures of various particle species, obtained in previous analyses, suggest a systematic rise with increasing particle mass, which is confirmed by the deuteron results.
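For reference, a Boltzmann ansatz of the kind used for such midrapidity fits can be written as (a standard parametrization; normalization conventions vary):

```latex
\[
  \frac{1}{m_T^{2}}\,\frac{d^{2}N}{dm_T\,dy} \;\propto\; \exp\!\left(-\frac{m_T}{T_{\mathrm{eff}}}\right)
\]
```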
An explanation can be the influence of a collective expansion with a radial expansion velocity β_r. By fitting a Siemens-Rasmussen function to the transverse mass spectra, a global temperature of T = (100 ± 8) MeV and a radial expansion velocity of β_r = 0.37 ± 0.01 are obtained. This temperature is still very high and only takes into account the production of deuteron nuclei.
The simultaneous fit of a blast-wave function to the transverse mass spectra of deuterons and other particles, as obtained by previous analyses, considers a velocity profile for the radial expansion velocity and takes into account the production of various particle species. The resulting global temperature T_kin = (68 ± 1) MeV and average transverse expansion velocity ⟨β_r⟩ = 0.341 ± 0.003 are within the expected range for the collision energy.
The Siemens-Rasmussen fits are also used to extrapolate the transverse mass spectra into unmeasured regions, to integrate them and obtain a rapidity-dependent count rate. This count rate exhibits a thermal shape for central events and shows increasing spectator contributions for more peripheral events.
The invariant yield spectra of the deuterons are compared to those of protons, as obtained by a previous analysis, in the context of a nucleon coalescence model. The nucleon coalescence factor extracted in this way, B₂ = (4.6 ± 0.1) × 10⁻³, agrees with the expected result for the beam energy that was studied.
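In the coalescence picture, B₂ relates the invariant deuteron yield to the square of the proton yield evaluated at half the deuteron momentum (the standard definition):

```latex
\[
  E_d\,\frac{d^{3}N_d}{dp_d^{3}} \;=\; B_2 \left( E_p\,\frac{d^{3}N_p}{dp_p^{3}} \right)^{2},
  \qquad \vec{p}_p = \vec{p}_d / 2 .
\]
```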
When performing transfer learning in computer vision, normally a pretrained model (source model) that is trained on a specific task and a large dataset like ImageNet is used. The learned representation of that source model is then used to perform a transfer to a target task. Performing transfer learning in this way has had a great impact on computer vision because it works seamlessly, especially on tasks that are related to each other. Recent research has investigated the relationship between different tasks and their impact on transfer learning by developing similarity methods. These similarity methods share the idea of not actually performing transfer learning in the first place, but rather predicting transfer learning rankings so that the best possible source model can be selected from a range of different source models. However, these methods have focused only on single-source transfers and have not paid attention to multi-source transfers. Multi-source transfers promise even better results than single-source transfers as they combine information from multiple source tasks, all of which are useful to the target task. We fill this gap and propose a many-to-one task similarity method called MOTS that predicts both single-source and multi-source transfers to a specific target task. We do so by using linear regression and the source representations of the source models to predict the target representation. We show that we achieve results at least on par with related state-of-the-art methods when focusing only on single-source transfers, using the Pascal VOC and Taskonomy benchmarks. We show that we even outperform all of them when using single- and multi-source transfers together (0.9 vs. 0.8) on the Taskonomy benchmark. We additionally investigate the performance of MOTS in conjunction with a multi-task learning architecture. The task-decoder heads of a multi-task learning architecture are used in different variations to perform multi-source transfers, since this promises efficiency over multiple single-task architectures and incurs less computational cost. Results show that our proposed method accurately predicts transfer learning rankings on the NYUD dataset, where the best transfer learning results are always achieved when using more than one source task. We further find that even using just one task-decoder head from the multi-task learning architecture promises better transfer learning results than using a single-task architecture for the same task, which is due to the information shared between different tasks in the earlier layers of the multi-task learning architecture. Since the MOTS rankings for selecting the MTI-Net task-decoder head with the highest transfer learning performance were very accurate for the NYUD dataset but not satisfying for the Pascal VOC dataset, further experiments are needed to verify the generalizability of MOTS rankings for the selection of the optimal task-decoder head from a multi-task architecture.
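A bare-bones sketch of the stated mechanism: predict the target representation as a linear function of (stacked) source representations and rank sources by the residual of the fit. Feature shapes and the synthetic data are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 64  # samples, feature dimension (placeholders)
sources = {f"task_{i}": rng.normal(size=(n, d)) for i in range(4)}
# Synthetic target built mostly from task_0 and task_2.
target = (0.7 * sources["task_0"] + 0.3 * sources["task_2"]
          + 0.05 * rng.normal(size=(n, d)))

def fit_error(source_feats, target_feats):
    """Least-squares fit of target features from stacked source features."""
    X = np.hstack(source_feats)
    W, *_ = np.linalg.lstsq(X, target_feats, rcond=None)
    return np.linalg.norm(X @ W - target_feats)

# Rank single sources: lower residual = better predicted transfer.
for name, feats in sources.items():
    print(name, fit_error([feats], target))
# Multi-source: combine the two best sources.
print("task_0+task_2", fit_error([sources["task_0"], sources["task_2"]], target))
```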
Autonomous steering of an electric bicycle based on sensor fusion using model predictive control
(2019)
In this thesis, a control and steering module for an autonomous bicycle was developed. Based on sensor fusion and model predictive control, the module is able to trace routes autonomously.
The system is developed to run on a Raspberry Pi. An ultrasonic sensor and a 2D lidar sensor are used for distance measurements. The vehicle's position is determined using GPS signals. Additionally, a camera is used to capture pictures for roadside detection. In order to recognize the road and the position of the vehicle on it, computer vision techniques are used. The captured images are denoised, Canny edge detection is performed, and a perspective transformation is applied. Thereafter, a sliding-window algorithm selects the edges belonging to the roadside and a second-order polynomial is fitted to the selected data. Based on this, the road curvature and the lateral position of the vehicle on the road are calculated. The implemented software is thus able to detect straight and curved roads as well as the vehicle's lateral offset.
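A sketch of the final fitting step described above: a second-order polynomial through the roadside pixels selected by the sliding window, from which curvature and lateral offset follow. Pixel data and the image width are placeholders.

```python
import numpy as np

# Roadside pixel coordinates selected by the sliding-window step (synthetic).
ys = np.linspace(0, 479, 50)                   # image rows (px)
xs = 1e-3 * (ys - 240) ** 2 + 0.2 * ys + 100   # image columns (px)

coeffs = np.polyfit(ys, xs, 2)                 # x = a*y^2 + b*y + c
a, b, _ = coeffs

# Radius of curvature at the bottom of the image (vehicle position).
y0 = ys.max()
radius = (1 + (2 * a * y0 + b) ** 2) ** 1.5 / abs(2 * a)

# Lateral offset: roadside position at y0 relative to the image centre,
# assuming a 640 px wide image (placeholder).
offset = np.polyval(coeffs, y0) - 640 / 2
print(f"curve radius ~{radius:.0f} px, lateral offset {offset:.1f} px")
```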
A route planning module was implemented to navigate the vehicle from the start to the destination coordinates. This is done by creating an abstract graph of the roads and using Dijkstra’s algorithm to determine the shortest path.
Four MPC controllers were implemented to control the movements of the vehicle. They are based on state-space equations derived from the linear single-track vehicle model. This relatively straightforward model makes it possible to predict the vehicle behavior and is efficient to compute. Each controller was built with different parameters for different vehicle speeds to account for the non-linearity of the system. The controllers simulate the future states of the system at each time step and select appropriate control signals for steering, throttle and brakes.
In this thesis, all the components of the steering and control module were individually validated. It was established that each individual component works as expected, and certain constraints and accuracy limits were identified. Finally, the closed-loop capabilities of the system were assessed using a test vehicle. Despite some limitations imposed by this setup, it was shown that the control module is indeed capable of autonomously navigating a vehicle and avoiding collisions.
Computational workflow optimization for magnetic fluctuation measurements of 3D nano-tetrapods
(2021)
The detailed understanding of micro- and nanoscale structures, in particular their magnetization dynamics, dominates contemporary solid-state physics studies. Most investigations have already identified an abundance of phenomena in one- and two-dimensional nanostructures. The following thesis focuses on the magnetic fingerprint of three-dimensional CoFe nano-magnets, specifically the temporal development of their hysteresis loop. These nano-magnets were grown in a tetrahedral pattern on top of a highly susceptible home-built GaAs/AlGaAs micro-Hall sensor using focused electron beam induced deposition (FEBID).
During the measurements, utmost efforts were made to exemplify current best research practices. The data life cycle of the present thesis is based upon open-source data science tools and packages. Data acquisition and analysis required self-written automated algorithms to handle the extensive quantity of data. Existing instrument-control software was improved, and new Python packages were devised to analyze and visualize the gathered data. The open-source Python data analysis framework (ana) was developed to facilitate computational reproducibility. This framework transparently analyses and visualizes the gathered data automatically using Continuous Analysis tools based on GitLab and Continuous Integration. This automation uses bespoke scripts combined with virtualization tools like Docker to facilitate reproducible and device-independent results.
The hysteresis loops reveal distinct differences between subsequently measured loops with identical initial experimental parameters, originating from the nano-magnet's magnetic noise. This noise is amplified in regions where switching processes occur. In such noise-prone regions, the time-dependent analysis reveals presumably thermally induced metastable magnetization states. The frequency-dependent power spectral density uncovers a characteristic 1/f² behavior in noise-prone regions with metastable magnetization states.
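A sketch of the spectral step: a Welch estimate of the power spectral density and a log-log slope fit to check the 1/f² behavior. The Brownian-noise input is synthetic, chosen because it has exactly this spectrum; sampling rate and fit range are placeholders.

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(0)
fs = 1000.0                                   # sampling rate (Hz), placeholder
signal = np.cumsum(rng.normal(size=100_000))  # Brownian noise ~ 1/f^2

f, psd = welch(signal, fs=fs, nperseg=4096)
mask = (f > 1) & (f < 100)                    # fit away from DC and Nyquist
slope, _ = np.polyfit(np.log(f[mask]), np.log(psd[mask]), 1)
print(f"log-log slope: {slope:.2f}  (expected ~ -2 for 1/f^2 noise)")
```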
The internet has often been considered a 'technology of freedom' – a nearly revolutionary tool believed to flatten social hierarchies and democratize access to media by 'giving voice' to everybody equally. Contrary to this point of view, research has shown the existence of a 'digital divide': the phenomenon that access to and use of the internet, as well as the outcomes derived from this use, correlate with pre-existing inequalities.
Based on ethnographic fieldwork among activists in Dakar, Senegal, this thesis analyzes how inequalities shape and are shaped by the relationships between activists and smartphones. Do smartphones indeed flatten social hierarchies, or are inequalities rather reproduced – or even reinforced – through them?
Frankfurt, as an international city, is home to transcultural people with diverse linguistic biographies and migration backgrounds. As teachers exert significant influence on the language practice of their students and on their awareness of self and others, it is crucial to examine the language ideologies and attitudes towards multilingualism of teachers who work in different schools in Frankfurt. An online questionnaire was selected as the data collection method for the combination of qualitative and quantitative analysis, in which teachers were asked to state their opinion on statements designed to represent contrasting viewpoints of separate bilingualism and flexible bilingualism. The study builds on existing evidence that multiple factors dynamically shape teachers' attitudes towards multilingualism.
School-level support and cooperation between educational institutions seem to be necessary to establish horizontal continuity and help students benefit from language-sensitive didactic methods, such as translanguaging.
During Run 3 (2021-2023) of the Large Hadron Collider, the Time Projection Chamber (TPC) of ALICE will be operated with quadruple stacks of Gas Electron Multipliers (GEMs). This technology will make it possible to overcome the rate limitation caused by the gated operation of the Multi-Wire Proportional Chambers (MWPCs) used in Run 1 (2009-2013) and Run 2 (2015-2018).
As part of the Upgrade project, long-term irradiation tests, so-called "ageing tests", have been carried out. A test setup with a detector using a quadruple stack of 10 × 10 cm² GEMs was built and operated in Ar-CO₂ and Ne-CO₂-N₂ gas mixtures. Detector performance parameters such as gas gain and energy resolution were monitored continuously. In addition, outgassing tests of materials used for the assembly process of the upgraded TPC were performed. To reach the expected dose of the GEM-based TPC, the detector was operated at much higher gains than the TPC. It was found that the GEMs keep their performance within the projected lifetime of the TPC. Most of the tested materials showed no negative impact on the detector. For the tested epoxy adhesive, no definite conclusion could be drawn.
At much higher doses than expected for the upgraded TPC, a new phenomenon was observed, which changed the hole geometry of the GEMs and led to a degradation of the energy resolution. Even though its occurrence is not expected during the lifetime of the GEM-based TPC, simulations were carried out to study this effect more systematically. The simulations confirmed that a change of the hole geometry of the GEMs leads to an increase of the local gain variation, which results in a decrease of the energy resolution.
Furthermore, the effect of methane as a quench gas on GEMs was studied, even though this gas is not foreseen to be used in the TPC. From ageing tests with single-wire proportional counters, it is well known that hydrocarbons are produced in the plasma of the avalanches, which cover the electrodes and lead to a degradation of the detector performance. Even though GEMs have a quite different geometry, the ageing tests showed that this technology is also prone to methane-induced ageing: a loss of gas gain as well as a degradation of the energy resolution due to deposits on the electrodes was observed. A qualitative and quantitative comparison between ageing in GEMs and in proportional counters was performed.
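As an example of the kind of monitoring mentioned above, the energy resolution is commonly extracted by fitting a Gaussian to the main peak of a pulse-height spectrum; the spectrum below is synthetic and the procedure is illustrative rather than the setup's actual analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
spectrum = rng.normal(loc=5.9, scale=0.5, size=20_000)  # synthetic peak (a.u.)
counts, edges = np.histogram(spectrum, bins=100)
centers = 0.5 * (edges[:-1] + edges[1:])

def gauss(x, A, mu, sigma):
    return A * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

(A, mu, sigma), _ = curve_fit(gauss, centers, counts, p0=[counts.max(), 5.9, 0.5])
fwhm = 2.355 * abs(sigma)  # FWHM of a Gaussian
print(f"energy resolution (FWHM/mean): {fwhm / mu:.1%}")
```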
Software updates are a critical success factor in mobile app ecosystems. Through publishing regular updates, platform providers enhance their operating systems for the benefit of both end users and third-party developers. It is also a way of attracting new customers. However, this platform evolution poses the risk of inadvertently introducing software problems, which can severely disturb the ecosystem's balance by compromising its foundational technologies. So far, little to no research has addressed this issue from a user-centered perspective. The thesis at hand draws on IS post-adoption literature to investigate the potential negative influences of operating system updates on mobile app users. The release of Apple's iOS 13 update serves as the research object. Based on over half a million user reviews from the App Store, data mining techniques are applied to study the impact of the new platform version. The results show that iOS 13 caused complications with a large number of popular apps, leading to a significant decline in user ratings and an uptrend in negative sentiment. Feature requests, functional complaints, and device compatibility are identified as the three major issue categories. These issue types are compared in terms of their quantifiable negative effect on users' continuance intention. In essence, the findings contribute to IS research on post-adoption behavior and provide guidance to ecosystem participants in dealing with update-induced platform issues.
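One plausible realization of the sentiment step, sketched with VADER from NLTK; the two reviews are invented examples, not App Store data, and the thesis may well have used a different technique.

```python
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
sia = SentimentIntensityAnalyzer()

reviews = [
    "Since the iOS 13 update the app crashes on launch.",  # invented example
    "Love the new dark mode, works great!",                # invented example
]
for text in reviews:
    # compound score: -1 (most negative) to +1 (most positive)
    print(sia.polarity_scores(text)["compound"], text)
```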
In this thesis, I investigate the possibility that at the smallest length scale (the Planck scale) the very notion of "dimension" needs to be revisited. Due to quantum effects, spacetime might become very turbulent at these scales, and properties like those of fractals emerge, including a scale-dependent dimension. It seems that this "spontaneous dimensional reduction" and the appearance of a minimal physical length are very general effects that most approaches to quantum gravity share. The main emphasis is given to the "spectral dimension" and its calculation for strings and p-branes.
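The spectral dimension referred to here is conventionally defined through the return probability P(s) of a diffusion process on the spacetime, with fictitious diffusion time s:

```latex
\[
  d_s \;=\; -2\,\frac{d \ln P(s)}{d \ln s} .
\]
```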
Virtual machines are for the most part not used in high-energy physics (HEP) environments. Even though they provide a high degree of isolation, the performance overhead they introduce is too great for them to be used. With the rising number of container technologies and their increasing separation capabilities, HEP environments are evaluating whether they could utilize this technology. Container images are small and self-contained, which allows them to be easily distributed throughout the global environment. They also offer near-native performance while at the same time providing an often acceptable level of isolation. Only the needed services and libraries are packed into an image and executed directly by the host kernel. This work compared the performance impact of the three container technologies Docker, rkt and Singularity. The host kernel was additionally hardened with grsecurity and PaX to strengthen its security and make exploitation from inside a container harder. The execution time of a physics simulation was used as a benchmark. The results show that the different container technologies have a different impact on the performance. The performance loss on a stock kernel is small; in some cases the containers were even faster than running without one. Docker showed the best overall performance on a stock kernel. The difference on a hardened kernel was bigger than on a stock kernel, but in favor of the container technologies. rkt performed better than all the others in almost all cases.
In this thesis, Planck-size black holes are discussed. Specifically, new families of black holes are presented. Such black holes exhibit an improved short-scale behaviour and can be used to implement the gravity self-completeness paradigm. These geometries are also studied within the ADD large extra dimensions scenario. This allows black hole remnant masses to reach the TeV scale. It is shown that the evaporation endpoint for this class of black holes is a cold stable remnant. One family of black holes considered in this thesis features a regular de Sitter core that counters gravitational collapse with a quantum outward pressure. The other family of black holes turns out to fit nicely into the holographic information bound on black holes and leads to black hole area quantization and applications in the gravitational entropic force. As a result, gravity can be derived as an emergent phenomenon from thermodynamics.
The thesis contains an overview of recent quantum gravity black hole approaches and concludes with the derivation of nonlocal operators that modify the Einstein equations into ultraviolet-complete field equations.