This study analyzes storyline structure in three Hausa home videos: Mai Kudi (The Rich Man), Sanafahna (With Time Truth Shall Dawn) and Albashi (Salary). The study measures storyline structure in these films against a Hollywood film industry model of story writing, the Hero's Journey. It uses narrative analysis as its analytical tool and narrative theory as its framework. After analyzing these videos, the study found that the major elements of storyline structure in Vogler's model formed the framework of the storyline structure in the Hausa home videos analyzed. However, in spite of the preponderance of these elements within the storyline structure, there are significant variations from Vogler's model. Specifically, Vogler's model has some twelve stages spread across the universal structure of storytelling, i.e. beginning, middle and end. Few of these stages were found to exist in the Hausa narrative structure, perhaps due to cultural differences between Western, Indian and Hausa cultures. The study therefore recommends that screenwriters and producers be aware of the existence of standard models of scriptwriting. It also recommends more training for scriptwriters in the Hausa film industry.
WaterGAP (Water - Global Assessment and Prognosis) is a tool for modeling global water use and water availability. It participates, among other models, in the ISIMIP initiative (the Inter-Sectoral Impact Model Intercomparison Project). As part of this initiative, water temperature should be calculated by the participating hydrological models because it plays a vital role in many chemical, physical and biological processes. The subject of this master's thesis is therefore to implement the physically based surface water temperature computation after VAN BEEK ET AL. (2012) and WANDERS ET AL. (2019) into WaterGAP and to compare the results to the statistical regression approach by PUNZET ET AL. (2012). The computation is validated with observed water temperature data obtained from the GEMStat water quality database. The results are good for arctic and temperate latitudes. Surface water temperatures for tropical rivers are overestimated, most likely due to the overestimation of precipitation temperatures, incoming radiation and groundwater temperatures. The comparison with the regression model by PUNZET ET AL. (2012) shows matching results. The regression model even matches the WaterGAP results for most of the simulations of the future under climate change conditions, where the regression model would be expected to stop working due to changing environmental parameters. Several assumptions had to be made in order to implement the water temperature calculation in WaterGAP. These include, e.g., discharge temperatures for power plant cooling water, precipitation temperatures and surface runoff temperatures. For model improvements, three different values for the different regions of the world could perhaps be used to cool down the precipitation and surface runoff. The model could also be improved by refining the ice formation calculation, especially for the conditions when the ice melts, breaks up and is transported downstream. Furthermore, the feedback to the river channel roughness could be implemented once ice has formed. The WaterGAP model, upgraded with the water temperature calculation, will support the ISIMIP initiative in the future.
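At its core, the physically based approach is an energy balance over the water column. Purely as an illustration, and not the actual WaterGAP implementation, a minimal sketch of such an energy-balance temperature update could look like this; the fluxes, depth and time step are hypothetical placeholders:

```python
# Illustrative energy-balance update for surface water temperature.
# NOT the WaterGAP code; fluxes and geometry are hypothetical placeholders.

RHO_WATER = 1000.0   # density of water [kg m^-3]
CP_WATER = 4186.0    # specific heat capacity of water [J kg^-1 K^-1]

def update_water_temperature(t_water, net_radiation, sensible_flux,
                             latent_flux, depth, dt):
    """Advance the water temperature [degC] over one time step dt [s].

    net_radiation, sensible_flux, latent_flux: heat fluxes [W m^-2],
    positive into the water column; depth: mean water depth [m].
    """
    net_flux = net_radiation + sensible_flux + latent_flux
    # Temperature change of a well-mixed water column of the given depth.
    dT = net_flux * dt / (RHO_WATER * CP_WATER * depth)
    t_new = t_water + dT
    # Simple freezing cap: below 0 degC the remaining energy deficit would
    # go into ice formation, which would be handled by a separate ice scheme.
    return max(t_new, 0.0)

# Example: one daily step for a 2 m deep reach with 150 W m^-2 net heating.
print(update_water_temperature(12.0, 200.0, -30.0, -20.0, 2.0, 86400.0))
```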
The ALICE Time Projection Chamber (TPC) is the main tracking detector of ALICE, which was designed to perform well at multiplicities of up to 20000 charged primary and secondary tracks emerging from Pb-Pb collisions. Successful operation of such a large and complex detector requires elaborate calibration and commissioning. The main goal of the calibration procedures is to provide the information needed by the offline software for the reconstruction of particle tracks with sufficient precision so that the design performance can be achieved. For a precise reconstruction of particle tracks in the TPC, the calibration of the drift velocity, which in conjunction with the drift time provides the z position of the traversing particles, is essential. In this thesis, an online method for the calibration of the drift velocity is presented. It uses the TPC Laser System, which generates 336 straight tracks within the active volume of the TPC. A subset of these tracks, showing sufficiently small distortions, is used in the analysis. The resulting time-dependent drift velocity correction parameters are entered into a database and provide start values for the offline reconstruction chain of ALICE. Even though no particle tracking information is used, the online drift velocity calibration agrees with the full offline calibration including tracking at the level of about 2 × 10⁻⁴. In chapter 2, a short overview of the ALICE detector, as well as the data-taking model of ALICE, is given. In chapter 3, the TPC detector is described in detail. Lastly, in chapter 4, the online drift velocity calibration method is presented, together with a detailed description of the TPC laser system.
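The z coordinate in a TPC follows from the measured drift time and the calibrated drift velocity, z = v_drift · t_drift (measured from the readout plane). As a purely illustrative sketch, and not the ALICE calibration software, a multiplicative drift-velocity correction could be estimated from laser tracks at known z positions roughly as follows; the function name, data layout and numbers are hypothetical:

```python
# Illustrative drift-velocity correction from laser tracks at known z.
# NOT the ALICE online calibration code; names and data are hypothetical.

def drift_velocity_correction(laser_hits, v_drift_nominal):
    """Estimate a multiplicative correction factor for the drift velocity.

    laser_hits: list of (z_true, t_drift) pairs, where z_true [cm] is known
    from the laser-system geometry and t_drift [us] is measured by the TPC.
    v_drift_nominal: nominal drift velocity [cm/us].
    """
    ratios = []
    for z_true, t_drift in laser_hits:
        z_reco = v_drift_nominal * t_drift   # z reconstructed with nominal v
        if z_reco > 0.0:
            ratios.append(z_true / z_reco)   # factor that would fix this hit
    # Average over all accepted laser tracks.
    return sum(ratios) / len(ratios)

# Example: nominal 2.7 cm/us; these hits suggest a ~0.1% lower true velocity.
hits = [(100.0, 37.07), (200.0, 74.15), (250.0, 92.68)]
print(drift_velocity_correction(hits, 2.7))
```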
The aim of this thesis is to find a geometric configuration that allows electron insertion into a Gabor plasma lens in order to increase the density of the confined electrons and to provide ignition conditions at parameters where ignition is otherwise not possible. First, simulations using CST and bender were conducted to investigate several geometric configurations with respect to their performance when electrons are inserted manually. One particular design was chosen as the basis for an experiment. In order to prepare the experiment, further simulations using the code bender were conducted to investigate the density distribution that forms inside the Gabor lens when electrons are inserted transversally in accordance with the chosen design. Additionally, bender was used to investigate the impact of the initial electron energy on the distribution inside the lens. Simulations with and without space charge effects showed a significant impact of the space charge effects on the resulting density distribution. Therefore, space charge effects have proven to be the major electron redistribution process. A given electron source was characterised in order to determine its performance under the conditions inside a Gabor lens. In particular, a transversal magnetic field that will be present in the experiment has to be compensated by shielding the inner regions of the source with a μ-metal layer. With a μ-metal shield, transversal magnetic fields are sufficiently tolerable to perform measurements in a Gabor lens. Additionally, operating close to 100 eV electron energy yields a maximum in the emitted current. Adding a Wehnelt cylinder to the electron source further improves the extracted current to roughly 1 mA. A test stand consisting of a newly designed anode for the Gabor lens, as well as a terminal for the electron source, was constructed. The electron source was thoroughly characterised in the environment of the Gabor lens and the ignition properties of the new system were evaluated. In further experiments, electron-beam-assisted ignition by increasing the residual gas pressure was observed, and the impact of the position of the electron source on the ignition properties was investigated. In addition, ignition of a sub-critical state, i.e. a combination of potential, magnetic field and pressure that did not yet ignite by itself, was achieved by increasing the extracted current from the electron source. Finally, the electron source was used to influence a pre-ignited plasma; the measured density was increased by the use of the electron source in most cases. This project is part of the EDEN collaboration (Electron DENsity boosting) of the NNP Group at IAP Frankfurt with INFN institutes in Bologna and Catania.
Electron identification with a likelihood method and measurements of di-electrons for the CBM-TRD
(2017)
In this work, a likelihood method has been implemented and investigated as a particle identification algorithm for the CBM-TRD.
The creation of the probability distributions for the likelihood method via V0 topologies appears to be feasible, and the purity of the obtained samples is sufficient for use in the likelihood method.
The comparison between the ANN and the likelihood method shows no differences in the identification performance. The pion suppression factor reaches the same values for the same electron identification efficiencies, and the yields of the resulting di-lepton signals are comparable. The signal-to-background ratios for both methods have the same values, at about 10⁻² in the invariant mass range m_inv = 1.5 - 2.5 GeV/c², which is expected to be sufficient to provide access to the thermal in-medium and QGP radiation.
The investigation of a detector system without a TRD shows no pion suppression for momenta above p = 6 GeV/c. Therefore, the background contributions increase drastically and the signal-to-background ratio decreases at all invariant masses, but especially in the invariant mass range m_inv = 1.5 - 2.5 GeV/c².
The background contributions in the invariant mass range m_inv = 1.5 - 2.5 GeV/c² are also influenced by the selected electron identification efficiency of the TRD, which significantly shifts the fraction of the eπ contributions relative to the total number of pairs.
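As a rough illustration of how a likelihood-based electron/pion separation works (not the CBM software; the per-layer probability densities below are hypothetical stand-ins), each TRD layer contributes a probability for the measured energy loss under the electron and the pion hypothesis, and the layer-wise products are combined into a normalized likelihood:

```python
# Illustrative likelihood-ratio electron/pion identification.
# NOT the CBM-TRD implementation; the probability densities are
# hypothetical Gaussian stand-ins for the real energy-loss distributions.
import math

def likelihood(signals, pdf_electron, pdf_pion):
    """Return L = P_e / (P_e + P_pi) for one track.

    signals: measured energy-loss values, one per TRD layer.
    pdf_electron, pdf_pion: probability densities of a signal under the
    electron and the pion hypothesis, respectively.
    """
    p_e, p_pi = 1.0, 1.0
    for s in signals:
        p_e *= pdf_electron(s)
        p_pi *= pdf_pion(s)
    return p_e / (p_e + p_pi)

def gauss(mean, sigma):
    """Normalized Gaussian density used as a placeholder distribution."""
    return lambda s: (math.exp(-0.5 * ((s - mean) / sigma) ** 2)
                      / (sigma * math.sqrt(2.0 * math.pi)))

# Electrons deposit more energy on average due to transition radiation.
pdf_e = gauss(8.0, 3.0)
pdf_pi = gauss(4.0, 2.0)

track = [7.5, 9.1, 6.8, 8.4, 7.0, 9.6]    # six TRD layers, arbitrary units
print(likelihood(track, pdf_e, pdf_pi))    # close to 1 => electron-like
```

A cut on L then trades electron identification efficiency against pion suppression, which is the working point varied in the studies above.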
Origin of the German Novel
(1927)
Virtual machines are, for the most part, not used in high-energy physics (HEP) environments. Even though they provide a high degree of isolation, the performance overhead they introduce is too great for them to be used. With the rising number of container technologies and their increasing separation capabilities, HEP environments are evaluating whether they could utilize the technology. The container images are small and self-contained, which allows them to be easily distributed throughout the global environment. They also offer near-native performance while at the same time providing an often acceptable level of isolation. Only the needed services and libraries are packed into an image and executed directly by the host kernel. This work compared the performance impact of the three container technologies Docker, rkt and Singularity. The host kernel was additionally hardened with grsecurity and PaX to strengthen its security and make exploitation from inside a container harder. The execution time of a physics simulation was used as a benchmark. The results show that the different container technologies have different impacts on the performance. The performance loss on a stock kernel is small; in some cases containers were even faster than running without one. Docker showed the best overall performance on a stock kernel. The difference on a hardened kernel was bigger than on a stock kernel, but in favor of the container technologies. rkt performed better than all the others in almost all cases.
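A minimal sketch of such a benchmark, assuming the runtimes are installed and using a hypothetical image name and workload (Docker and Singularity shown; rkt would be invoked analogously), could time the same command natively and inside each container:

```python
# Minimal benchmark sketch: time the same workload natively and in containers.
# NOT the setup from the thesis; image names and the workload are hypothetical.
import subprocess
import time

WORKLOAD = ["python3", "-c", "sum(i * i for i in range(10**7))"]
RUNTIMES = {
    "native": WORKLOAD,
    "docker": ["docker", "run", "--rm", "benchmark-image"] + WORKLOAD,
    "singularity": ["singularity", "exec", "benchmark.sif"] + WORKLOAD,
}

def run_once(cmd):
    """Return the wall-clock execution time of one command in seconds."""
    start = time.perf_counter()
    subprocess.run(cmd, check=True, stdout=subprocess.DEVNULL)
    return time.perf_counter() - start

for name, cmd in RUNTIMES.items():
    times = [run_once(cmd) for _ in range(3)]   # repeat to reduce noise
    print(f"{name:12s} min {min(times):.2f} s   mean {sum(times) / 3:.2f} s")
```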
Goal-Conditioned Reinforcement Learning (GCRL) is a popular framework for training agents to solve multiple tasks in a single environment. It is crucial to train an agent on a diverse set of goals to ensure that it can learn to generalize to unseen downstream goals. Therefore, current algorithms try to learn to reach goals while simultaneously exploring the environment for new ones (Aubret et al., 2021; Mendonca et al., 2021). This creates a form of the prominent exploration-exploitation dilemma. To relieve the pressure of a single agent having to optimize for two competing objectives at once, this thesis proposes the novel algorithm family Goal-Conditioned Reinforcement Learning with Prior Intrinsic Exploration (GC-π), which separates exploration and goal learning into distinct phases. In the first exploration phase, an intrinsically motivated agent explores the environment and collects a rich dataset of states and actions. This dataset is then used to learn a representation space, which acts as the distance metric for the goal-conditioned reward signal. In the final phase, a goal-conditioned policy is trained with the help of the representation space, and its training goals are randomly sampled from the dataset collected during the exploration phase. Multiple variations of these three phases have been extensively evaluated in the classic AntMaze MuJoCo environment (Nachum et al., 2018). The final results show that the proposed algorithms are able to fully explore the environment and solve all downstream goals while using every dimension of the state space for the goal space. This makes the approach more flexible compared to previous GCRL work, which only ever uses a small subset of the dimensions for the goals (S. Li et al., 2021a; Pong et al., 2020).
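Purely as a schematic of the three-phase structure described above (not the thesis implementation; the explorer, encoder trainer and policy trainer are hypothetical placeholders), the GC-π family could be outlined as:

```python
# Schematic of the three-phase GC-pi structure described in the abstract.
# NOT the thesis code; explorer, train_encoder and train_policy are
# hypothetical stand-ins for the actual components.
import random

def euclidean(a, b):
    """Distance in the learned representation space (placeholder metric)."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def gc_pi(env, explorer, train_encoder, train_policy, n_explore_steps):
    # Phase 1: an intrinsically motivated agent explores the environment
    # and collects a dataset of visited states and actions.
    dataset = explorer.collect(env, n_explore_steps)

    # Phase 2: learn a representation space from that dataset; distances
    # in this space act as the goal-conditioned reward signal.
    encoder = train_encoder(dataset)
    reward = lambda state, goal: -euclidean(encoder(state), encoder(goal))

    # Phase 3: train a goal-conditioned policy, sampling its training goals
    # from the states collected during exploration.
    sample_goal = lambda: random.choice(dataset)
    return train_policy(env, reward, sample_goal)
```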
Removing unwanted fractions from an ion beam is crucial for intense ion beams. This thesis explores, on an experimental basis, separation methods for low-intensity beams using a collimation channel, electric and magnetic dipoles, and a velocity selector. In addition, statistical data on degassing events during the commissioning of a pentode extraction system for beam energies from 20 - 120 keV will be presented.