In online video games, toxic interactions are very prevalent and often even considered an integral part of gaming. Most studies analyse toxicity in video games by examining the messages sent during a match, while only a few focus on other interactions. We focus specifically on in-game events to identify toxic matches, constructing a framework that takes a list of time-based events and projects them into a graph structure, which we then analyse with current methods from the field of graph representation learning. Specifically, we use a Graph Neural Network with Principal Neighbourhood Aggregation to analyse the graph structure and predict the toxicity of a match. We also discuss the subjectivity behind the term toxicity and why analysing only in-game messages with current state-of-the-art NLP methods is not sufficient to infer whether a match is perceived as toxic.
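As an illustration of this approach, the following minimal sketch classifies a toy event graph with Principal Neighbourhood Aggregation. It assumes PyTorch Geometric; the node features, edges and dimensions are illustrative placeholders, not the thesis's actual pipeline.

    # Hypothetical sketch: classify a match's event graph as toxic or not
    # using Principal Neighbourhood Aggregation (PNA). All features and
    # dimensions below are illustrative assumptions.
    import torch
    import torch.nn.functional as F
    from torch_geometric.data import Data
    from torch_geometric.nn import PNAConv, global_mean_pool

    # Toy event graph: 4 time-based events (nodes) with 8-dim features;
    # edges connect temporally adjacent events.
    x = torch.randn(4, 8)
    edge_index = torch.tensor([[0, 1, 2], [1, 2, 3]], dtype=torch.long)
    data = Data(x=x, edge_index=edge_index, batch=torch.zeros(4, dtype=torch.long))

    # PNA needs the in-degree histogram of the training graphs.
    deg = torch.bincount(torch.bincount(edge_index[1], minlength=4))

    class MatchClassifier(torch.nn.Module):
        def __init__(self):
            super().__init__()
            self.conv = PNAConv(8, 16,
                                aggregators=['mean', 'min', 'max', 'std'],
                                scalers=['identity', 'amplification', 'attenuation'],
                                deg=deg)
            self.out = torch.nn.Linear(16, 2)  # toxic / non-toxic

        def forward(self, data):
            h = F.relu(self.conv(data.x, data.edge_index))
            h = global_mean_pool(h, data.batch)  # one vector per match
            return self.out(h)

    logits = MatchClassifier()(data)  # shape [1, 2]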
Supermassive black hole binaries (SMBHBs) are among the most powerful known sources of gravitational waves (GWs). Accordingly, these systems could dominate the stochastic gravitational wave background (GWB) in the micro- and millihertz frequency range. The time until the merger of two SMBHs in the nucleus of a galaxy can be shortened through dynamical friction due to the presence of dark matter (DM) spikes around the SMBHs. To calculate the orbital evolution of individual SMBHBs within the Newtonian approximation, the SMBHBpy code is developed. This work confirms that the GW signals from SMBHBs with DM spikes can be clearly distinguished from those of binaries without surrounding matter. Using the upper limit on the characteristic strain of the GWB obtained from data of the Cassini spacecraft mission in 2001/2002, a lower limit on the matter density around SMBHBs is derived in this study. The result is subsequently compared with the theoretical density profiles for cold dark matter and self-interacting dark matter spikes.
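For orientation, two generic textbook ingredients that such a Newtonian treatment typically builds on are a power-law DM spike profile and Chandrasekhar dynamical friction; the exact normalisations used in SMBHBpy may differ:

    % Generic forms, not necessarily those implemented in SMBHBpy.
    % DM spike density profile around a black hole:
    \rho_\mathrm{DM}(r) = \rho_\mathrm{sp}\left(\frac{r_\mathrm{sp}}{r}\right)^{\alpha},
    \qquad r_\mathrm{in} \le r \le r_\mathrm{sp}
    % Chandrasekhar dynamical friction on a body of mass m moving at speed v:
    F_\mathrm{df} = \frac{4\pi G^2 m^2\,\rho_\mathrm{DM}(r)\,\ln\Lambda}{v^2}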
Large language models have become widely available to the general public, especially since ChatGPT's release. Consequently, the AI community has invested much effort into recreating language models of the same caliber as ChatGPT, since the latter remains a technical black box. This thesis aims to contribute to that cause by proposing R.O.B.E.R.T., a Robotic Operating Buddy for Efficiency, Research and Teaching. In doing so, it presents a first implementation of a lightweight environment which produces tailor-made, instruction-following language models with a heavy focus on conversational capabilities, models that instruct themselves into a given domain context. Within this environment, the generation of datasets, the fine-tuning process and finally the inference of a unique R.O.B.E.R.T. instance are all carried out as part of an automated pipeline.
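A minimal sketch of such a three-stage pipeline (dataset generation, fine-tuning, inference) is shown below, assuming the Hugging Face transformers library; the base model, prompt format and hyperparameters are illustrative assumptions, not the thesis's actual configuration.

    # Hypothetical pipeline sketch; "gpt2", the prompt template and all
    # hyperparameters are placeholders, not R.O.B.E.R.T.'s real setup.
    import torch
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              Trainer, TrainingArguments)

    tok = AutoTokenizer.from_pretrained("gpt2")          # placeholder base model
    tok.pad_token = tok.eos_token
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    # 1) Dataset generation: instruction/response pairs for a target domain.
    pairs = [("What is your domain?", "I answer questions about my domain.")]
    enc = [tok(f"### Instruction:\n{q}\n### Response:\n{a}",
               truncation=True, padding="max_length", max_length=128,
               return_tensors="pt") for q, a in pairs]
    dataset = [{"input_ids": e.input_ids[0],
                "attention_mask": e.attention_mask[0],
                "labels": e.input_ids[0].clone()} for e in enc]

    # 2) Fine-tuning on the generated pairs.
    trainer = Trainer(model=model,
                      args=TrainingArguments("robert-out", num_train_epochs=1,
                                             per_device_train_batch_size=1),
                      train_dataset=dataset)
    trainer.train()

    # 3) Inference with the tuned instance.
    prompt = tok("### Instruction:\nWhat do you do?\n### Response:\n",
                 return_tensors="pt")
    print(tok.decode(model.generate(**prompt, max_new_tokens=32)[0]))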
As part of the research for this thesis, a momentum spectrometer was set up and initial measurements on accelerated ions were performed. For this purpose, the hardware necessary for the operation of the spectrometer and for high-precision measurements was assembled. A control system for remote operation was developed, and the spectrometer was installed at the beamline used. There, measurements of low-energy ion beams in superposition with electrons confined in a Gabor lens can be carried out. Investigations were made of both the ions generated in the Gabor lens and the beam ions, leading to first results regarding the charge changes of beam ions during propagation through an electron atmosphere.
Debate topic expansion
(2022)
Given a debate topic, it is often useful to expand the topic, for reasons such as the following: (1) the scope of the debate topic is too narrow and we want to discuss more; (2) a debate topic is sometimes related to others, and the discussion is incomplete if we do not discuss those as well; (3) we may want to discuss a particular concept at the core of the debate topic. It is therefore worthwhile to build a model that finds expansions of a given topic.
In 2019, an IBM Research team proposed a method to expand the boundary of a given debate topic and find its expansion topics. Their paper distinguishes two types of topic expansion, consistent and contrastive; we focus on consistent expansions, defined as expansions that extend a topic in a positive, or at least neutral, direction.
The main objective of this thesis is to follow and examine the steps of the IBM Research team's approach. Since the original work develops the model for English, we implement a topic expansion model with seven steps, including pattern extraction, filtering and training, in another language (German) with the help of machine translation, and compare the results of different models in order to propose a final German model.
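To make the pattern-extraction step concrete, here is a minimal sketch that mines candidate expansion topics from text with a simple lexico-syntactic pattern; the pattern and toy corpus are illustrative, not the ones used by the IBM Research team or in this thesis.

    # Hypothetical pattern-extraction sketch: "X is a form of Y" suggests
    # Y as a consistent (broader) expansion of topic X. Corpus and pattern
    # are illustrative placeholders.
    import re

    corpus = [
        "Bitcoin is a form of cryptocurrency.",
        "Surrogacy and other forms of assisted reproduction are debated.",
    ]
    topic = "Bitcoin"

    pattern = re.compile(rf"{re.escape(topic)} is a form of (\w+)", re.IGNORECASE)

    candidates = []
    for sentence in corpus:
        candidates += pattern.findall(sentence)

    print(candidates)  # ['cryptocurrency'] -> later filtered and ranked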
Reproducible annotations
(2022)
This bachelor thesis presents a software solution which implements reproducible annotations in the context of the UIMA framework. This is achieved by automatically containerizing arbitrary analysis engines and annotating every analysis engine configuration in the processed CAS document. Any CAS document created by this solution is self-sufficient and able to reproduce the exact environment under which it was created.
A review of state-of-the-art software in the field of UIMA reveals that there are many implementations trying to increase reproducibility for a given application relying on UIMA, but no publication trying to increase the reproducibility of UIMA itself. This thesis closes that technological gap and concludes with a thorough analysis, which shows a negligible overhead in memory consumption but a significant performance regression depending on the complexity of the analysis engine examined.
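The two core ideas, running an engine in a pinned container and recording its exact configuration next to the output, can be sketched as follows. This assumes the Docker SDK for Python; the image name, engine parameters and file layout are illustrative, and the thesis's actual UIMA integration is more involved.

    # Hypothetical sketch of pinned, provenance-annotated engine runs.
    # Image, parameters and paths are placeholders.
    import json
    import docker  # Docker SDK for Python

    client = docker.from_env()
    image = client.images.pull("example/uima-engine", tag="1.0")  # placeholder image

    engine_config = {
        "image_digest": image.attrs.get("RepoDigests", []),  # pins the environment
        "parameters": {"language": "en", "modelVariant": "default"},
    }

    # Run the containerized engine over a mounted CAS document (illustrative paths).
    client.containers.run(image, command=["process", "/data/input.xmi"],
                          volumes={"/tmp/cas": {"bind": "/data", "mode": "rw"}},
                          remove=True)

    # Store the configuration with the CAS so the run can be reproduced later.
    with open("/tmp/cas/input.xmi.provenance.json", "w") as f:
        json.dump(engine_config, f, indent=2)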
When we browse via WiFi on a laptop or mobile phone, we receive data over a noisy channel, so the received message may differ from the one that was originally sent. Luckily, it is often possible to reconstruct the original message, but doing so may take a lot of time: decoding the received message is a complex problem, NP-hard to be exact. As we continue browsing, new information arrives at a high frequency, so if lags are to be avoided, and since memory is finite, there is not much time left for decoding. Coding theory tackles this problem by creating models of the channels we use to communicate and tailoring codes to the channel properties. A well-known family of codes are Low-Density Parity-Check (LDPC) codes, which are widely used in standards like WiFi and DVB-T2. In practical settings, the complexity of decoding a received message can be heavily reduced by using LDPC codes and approximative decoding algorithms. This thesis lays out the basic construction of LDPC codes and their decoding using the sum-product algorithm. On this basis, a neural network to improve decoding is introduced: the sum-product algorithm is transformed into a neural network decoder. This approach was first presented by Nachmani et al. and treated in detail by Navneet Agrawal in 2017. To find out how machine learning can improve decoding, the bit error rates of the trained neural network decoder are compared with those of the classic sum-product algorithm. Experiments with static and dynamic training datasets of diverse sizes, various signal-to-noise ratios, and a feed-forward as well as a recurrent architecture show how to tune the neural network decoder even further. Results of the experiments are used to verify statements made in Agrawal's work. In addition, corrections and improvements in the area of metrics are presented. To facilitate access for others, an implementation of the neural network is made publicly available.
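The classic sum-product decoder that the neural network is derived from can be sketched in a few lines of NumPy; the parity-check matrix, channel LLRs and iteration count below are toy examples, and the trainable-weight extension of Nachmani et al. is omitted.

    # Minimal sum-product (belief propagation) decoding sketch for a toy
    # parity-check code. H and the LLR values are illustrative placeholders.
    import numpy as np

    H = np.array([[1, 1, 0, 1, 0, 0],
                  [0, 1, 1, 0, 1, 0],
                  [1, 0, 0, 0, 1, 1]], dtype=float)
    llr = np.array([2.1, -0.8, 1.5, 0.3, -1.9, 1.2])  # channel LLRs (example)

    m_cv = np.zeros_like(H)                     # check-to-variable messages
    for _ in range(10):                         # fixed iteration budget
        # Variable-to-check: channel LLR plus all other check messages.
        m_vc = H * (llr + m_cv.sum(axis=0)) - m_cv
        # Check-to-variable: product of tanh(./2) over the other variables.
        t = np.where(H == 1, np.tanh(m_vc / 2), 1.0)
        prod = t.prod(axis=1, keepdims=True)
        m_cv = H * 2 * np.arctanh(np.clip(prod / np.where(t == 0, 1e-12, t),
                                          -0.999999, 0.999999))

    posterior = llr + m_cv.sum(axis=0)          # final per-bit LLRs
    codeword = (posterior < 0).astype(int)      # hard decision
    print(codeword, (H @ codeword % 2 == 0).all())  # bits + parity check

The neural decoder of Nachmani et al. unrolls these iterations into network layers and attaches a learnable weight to each message, which is exactly what makes the comparison of bit error rates against this baseline meaningful.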
The aim of this bachelor thesis is to compare and empirically test the use of classification to improve the topic models Latent Dirichlet Allocation (LDA) and Author Topic Modeling (ATM) in the context of the social media platform Twitter. For this purpose, a corpus was classified with the Dewey Decimal Classification (DDC) and then used to train the topic models. A second dataset, the unclassified corpus, was used for comparison. The assumption that the use of classification could improve the topic models did not prove true for the LDA topic model: a sufficiently good improvement of the models could not be achieved. The ATM model, on the other hand, could be improved by using the classification. In general, the ATM model performed significantly better than the LDA model. In the context of the social media platform Twitter, it can thus be seen that the ATM model is superior to the LDA model and can additionally be improved by classifying the data.
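For reference, the two compared models can be trained with gensim as in the following sketch; the toy corpus, author mapping and topic counts are placeholders for the classified and unclassified Twitter datasets used in the thesis.

    # Illustrative sketch of the two compared topic models; data and
    # parameters are placeholders.
    from gensim.corpora import Dictionary
    from gensim.models import LdaModel, AuthorTopicModel

    docs = [["climate", "policy", "debate"],
            ["football", "match", "goal"],
            ["climate", "science", "data"]]
    dictionary = Dictionary(docs)
    corpus = [dictionary.doc2bow(d) for d in docs]

    # Plain LDA on the document-term corpus.
    lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=2, passes=10)

    # The Author Topic Model additionally needs a doc-index list per author.
    author2doc = {"user_a": [0, 2], "user_b": [1]}
    atm = AuthorTopicModel(corpus=corpus, id2word=dictionary,
                           author2doc=author2doc, num_topics=2, passes=10)

    print(lda.print_topics())
    print(atm.get_author_topics("user_a"))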
Over the course of the last financial crises, retail investors have been identified as bearing a major share of the resulting financial losses. As a consequence, financial market regulators have put major effort into retail investor protection, especially following the Great Financial Crisis of 2007-2009. The major legislative initiatives, such as the Dodd-Frank Act in the United States, seemingly manifest retail investors' overly fragile role among the variety of professional investors in the financial market by establishing additional protection requirements for retail investors. A vast majority of the related international academic literature supports those steps. However, considering the most recent developments in the US financial markets, the dogma of the lamb-like retail investor seems to be crumbling: in 2021, under the banner of "WallStreetBets", retail investors systematically colluded in investment bets which eventually not only disrupted financial markets by distorting the stock price formation of single firms but also systematically squeezed sizeable positions of institutional investors. The key question arises of how retail investors have changed, such that they not only became a source of price distortions and market turmoil but also endanger professional institutional investors. In this thesis, I study this changing role and investment behavior of retail investors, relating the retail investor's well-established and well-researched behavioral characteristics to changing environmental aspects such as regulation and the adoption and usage of technology for information gathering and collaboration. Based on the combination of those different research streams, I am able to deduce the consequences of these developments for financial markets.
Principles of cognitive maps
(2021)
This thesis analyses the concept of a cognitive map in the research field of geography. Cognitive mapping research is essential as it investigates the relations between cognitive maps and the external representations of space that people regularly use when acquiring spatial knowledge, such as maps in geographic information systems. Moreover, cognitive maps, when extended to semantic maps, explain the relations between people and things in a non-physical environment, where the considered space is spanned not by distance but by other non-spatial variables. Nevertheless, cognitive maps are often distorted. Although a good formation of a cognitive map is vital in navigation processes, cognitive distortions are barely investigated in the field of geography. By analysing the relevant work, especially Tobler's first law of geography, a new lexical variant of Tobler's first law could be stated that could presumably describe a specific distortion in the processing of landmarks in cognitive maps.