Animal agriculture is responsible for at least 16.5% of global yearly CO2e (carbon dioxide equivalent) emissions (Twine 2021: 3) and is thus a partial cause of climate change and its disastrous consequences for millions (Romanello et al. 2023: 1-2). At the same time, animal agriculture regularly restricts and damages the bodily autonomy of animals (Hampton et al. 2021: 28), which could be unethical depending on the underlying ethical theory. Nevertheless, the policy option of veganism by law is rarely considered. Definitions of veganism range from an individual ethic of abstention from consuming animal products to a political philosophy calling for the abolition of animal agriculture (Mancilla 2016: 1-3). Because veganism through the cessation of animal agriculture could be the policy solution to the aforementioned issues concerning the rights of present and future generations affected by climate change and the rights of animals, I explore arguments for and against the implementation of veganism by law.
Although a veganized agriculture would provide 52% of the emission reductions required for the 2°C target of the Paris climate accord (Eisen and Brown 2022: 6) and could allow for greater animal welfare, current policies of many governments promote the opposite. For example, 82% of the subsidies of the European Union’s Common Agricultural Policy are routed towards the production of animal products and animal feed (Kortleve et al. 2024: 1-2). Moreover, for American adults, the U.S. Department of Agriculture and the U.S. Department of Health and Human Services (2020: 96) promote the consumption of 720 ml of cow milk or other dairy per day and recommend a protein intake through meat and eggs of between 652 g and 936 g per week.
In this bachelor thesis I outline the current state of animal agriculture, its emissions and the associated harm towards animals and humans. The empirical findings are examined ethically from a consequentialist and a deontological perspective. The ethical analysis concerning the decisions of individuals is then converted into a political philosophy regarding the duties of states towards present and future generations and animals, including corresponding policy implications.
The normative argument is mainly based on the example of industrialized animal agriculture, the area where most of the interaction between animals and humans occurs. Nevertheless, other sectors in which animals are used for human consumption or entertainment are discussed in less detail, in order to analyze the arguments for veganism by law.
In short, following the political argument structure recommended by Abel et al. (2021: 6), the following hypothesis acts as the basis for the political and philosophical discussion and is revised where necessary:
Moral claims: The state should protect present and future generations and animal rights.
Empirical claims: Animal agriculture is a major contributor to climate change and its corresponding effects and regularly harms the wellbeing of animals.
Conclusion: The state should enforce veganism by law.
This bachelor thesis develops a pipeline for the automatic processing of scanned hospital letters: HospLetExtractor. Hospital letters can contain valuable information about potential adverse drug reactions and useful case information relevant to pharmacovigilance. To make this data accessible, this thesis presents a pipeline consisting of image pre-processing, optical character recognition and post-processing. Pre-processing deskews the images, removes lines and rectangles, reduces noise and applies super-resolution. For the post-processing, a spell-checking system was set up, including a newly built word frequency dictionary for German medical terms based on a purpose-built corpus of German medical texts. Furthermore, classical and deep learning models for the classification of hospital letters were compared, with the transformer-based models performing best. In order to train and test the models, a new gold standard was created. By making these medical documents accessible for automatic analysis, this work hopes to contribute to expanding the scope of pharmacovigilance.
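The abstract does not reproduce the spell-checking component itself; as a minimal sketch of how a frequency-dictionary spell checker of this kind typically works (the toy dictionary and function names here are illustrative assumptions, not the thesis's actual code or corpus):

```python
from collections import Counter

# Toy frequency dictionary; the thesis builds one from a German medical corpus.
WORD_FREQ = Counter({"diagnose": 120, "therapie": 95, "patient": 300, "anamnese": 40})

ALPHABET = "abcdefghijklmnopqrstuvwxyzäöüß"

def edits1(word):
    """All strings one edit (delete, transpose, replace, insert) away."""
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [l + r[1:] for l, r in splits if r]
    transposes = [l + r[1] + r[0] + r[2:] for l, r in splits if len(r) > 1]
    replaces = [l + c + r[1:] for l, r in splits if r for c in ALPHABET]
    inserts = [l + c + r for l, r in splits for c in ALPHABET]
    return set(deletes + transposes + replaces + inserts)

def correct(word):
    """Return the most frequent known candidate within edit distance 1."""
    if word in WORD_FREQ:
        return word
    candidates = [w for w in edits1(word) if w in WORD_FREQ]
    return max(candidates, key=WORD_FREQ.__getitem__) if candidates else word

# e.g. an OCR error: correct("diagnise") -> "diagnose"
```

The frequency dictionary breaks ties between equally distant candidates, which is why building it from in-domain (here: medical) text matters.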
Natural Language Processing (NLP) for big data requires an efficient and sophisticated infrastructure to complete tasks both fast and correctly. Providing an intuitive and lightweight interaction with a framework that abstracts and simplifies complex tasks assists in reaching this goal. This bachelor thesis extends the NLP framework Docker Unified UIMA Interface (DUUI) by an API and a web-based graphical user interface to control and manage pipelines for the automated analysis of large quantities of natural language. The extension aims to reduce the entry barrier into the field as well as to accelerate the creation and management of pipelines according to UIMA standards. Pipelines can be executed in the browser or via the web API directly and then monitored on a document level. The evaluation of usability and user experience indicates that the implementation benefits the framework by making its usage more user-friendly, lightweight, and intuitive while also making the management of pipelines more efficient.
Assessing communicative accommodation in the context of large language models: a semiotic approach
(2023)
Recently, significant strides have been made in the ability of transformer-based chatbots to hold natural conversations. However, despite growing societal and scientific relevance, there are few frameworks that systematically derive what it means for a chatbot conversation to be natural. The present work approaches this question through the phenomenon of communicative accommodation/interactive alignment. While existing research suggests that humans adapt communicatively to technologies, the aim of this work is to explore the accommodation of AI chatbots to an interlocutor. Its research interest is twofold: Firstly, the structural ability of the transformer architecture to support accommodative behavior is assessed using a frame constructed in accordance with existing accommodation theories. This results in hypotheses to be tested empirically. Secondly, since effective accommodation produces the same outcomes regardless of technical implementation, a behavioral experiment is proposed. Existing quantifications of accommodation are reconciled, extended, and modified to apply them to nonhuman interlocutors. Thus, a measurement scheme is suggested which evaluates textual data from text-only, double-blind interactions between chatbots and humans, between chatbots and chatbots, and between humans and humans. Using the generated human-to-human convergence data as a reference, the degree of artificial accommodation can be evaluated. Accommodation as a central facet of artificial interactivity can thus be evaluated directly against its theoretical paradigm, i.e. human interaction. If subsequent examinations show that chatbots effectively do not accommodate, there may be a new form of algorithmic bias, emerging from aggregate accommodation towards chatbots but not towards humans. Thus, existing, hegemonic semantics could be cemented through chatbot learning. Meanwhile, the ability to effectively accommodate would render chatbots vastly more susceptible to misuse.
In online video games, toxic interactions are highly prevalent and often even considered an integral part of gaming. Most studies analyse toxicity in video games by analysing the messages sent during a match, while only a few focus on other interactions. We focus specifically on in-game events to try to identify toxic matches, by constructing a framework that takes a list of time-based events and projects them into a graph structure, which we can then analyse with current methods from the field of graph representation learning. Specifically, we use a Graph Neural Network and Principal Neighbourhood Aggregation to analyse the graph structure and predict the toxicity of a match. We also discuss the subjectivity behind the term toxicity and why analysing only in-game messages with current state-of-the-art NLP methods is not sufficient to infer whether a match is perceived as toxic or not.
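One plausible way to project a list of time-based events into a graph is to make each event a node and connect events that occur close together in time (a sketch under that assumption; the thesis's actual construction, e.g. edges via shared players, may differ):

```python
def events_to_graph(events, window=5.0):
    """Project a list of (timestamp, actor, kind) events into a graph.
    Nodes are event indices; an edge links two events that occur within
    `window` seconds of each other. Returns (sorted events, edge list)."""
    events = sorted(events, key=lambda e: e[0])
    edges = []
    for i, (t_i, *_rest) in enumerate(events):
        for j in range(i + 1, len(events)):
            if events[j][0] - t_i > window:
                break  # events are sorted by time, so no later event qualifies
            edges.append((i, j))
    return events, edges
```

The resulting node/edge lists can then be fed, together with per-event features, into a GNN layer such as Principal Neighbourhood Aggregation for match-level classification.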
Supermassive black hole binaries (SMBHBs) are among the most powerful known sources of gravitational waves (GWs). Accordingly, these systems could dominate the stochastic gravitational wave background (GWB) in the micro- and millihertz frequency range. The time until the merger of two SMBHs in the nucleus of a galaxy can be shortened through dynamical friction due to the presence of dark matter (DM) spikes around the SMBHs. To calculate the orbital evolution of individual SMBHBs within the Newtonian approximation, the SMBHBpy code is developed. This work confirms that the GW signals of SMBHBs with DM spikes can be clearly distinguished from those of binaries without surrounding matter. Making use of the upper limit on the characteristic strain of the GWB derived from the data of the Cassini spacecraft mission in 2001/2002, a lower limit on the matter density around SMBHBs is derived in this study. The result is subsequently compared with the theoretical density profiles for cold dark matter and self-interacting dark matter spikes.
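The GW-driven part of such an inspiral can be sketched for circular orbits with the classic Peters (1964) decay formula, da/dt = -(64/5) G³ m₁m₂(m₁+m₂) / (c⁵a³); the SMBHBpy code additionally includes the dynamical friction from the DM spike, which this illustrative sketch omits:

```python
# Orbital decay of a circular binary from GW emission alone (Peters 1964).
# Dynamical friction from a dark matter spike, central to the thesis, is omitted.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8            # speed of light, m s^-1
M_SUN = 1.989e30       # solar mass, kg

def da_dt(a, m1, m2):
    """Rate of change of the semi-major axis a [m] for masses m1, m2 [kg]."""
    return -(64 / 5) * G**3 * m1 * m2 * (m1 + m2) / (C**5 * a**3)

def evolve(a0, m1, m2, dt, steps):
    """Crude forward-Euler integration of the inspiral over `steps` of size dt [s]."""
    a = a0
    for _ in range(steps):
        a += da_dt(a, m1, m2) * dt
    return a

# e.g. two 1e8 solar-mass black holes at roughly milliparsec separation
```

Adding a dynamical-friction term to `da_dt` shortens the merger time, which is what makes the GW signal of a spike-embedded binary distinguishable from the vacuum case.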
Large language models have become widely available to the general public, especially due to ChatGPT's release. Consequently, the AI community has invested much effort into recreating language models of the same caliber as ChatGPT, since the latter is still a technical black box. This thesis aims to contribute to that cause by proposing R.O.B.E.R.T., a Robotic Operating Buddy for Efficiency, Research and Teaching. In doing so, it presents a first implementation of a lightweight environment that produces tailor-made, instruction-following language models with a heavy focus on conversational capabilities, which instruct themselves into a given domain context. Within this environment, the generation of datasets, the fine-tuning process and finally the inference of a unique R.O.B.E.R.T. instance are all carried out as part of an automated pipeline.
As part of the research for this thesis, a momentum spectrometer was set up and initial measurements on accelerated ions were performed. For this purpose, the hardware necessary for the operation of the spectrometer and for high-precision measurements was assembled. A control system for remote operation was developed and the spectrometer was installed at the beamline used. There, measurements of low-energy ion beams in superposition with electrons confined in a Gabor lens can be carried out. Investigations were made on both the ions generated by the Gabor lens and the beam ions, leading to first results regarding the charge changes of beam ions during propagation through an electron atmosphere.
Debate topic expansion
(2022)
Given a debate topic, it is often useful to expand it, for the following reasons: (1) the scope of the debate topic is too narrow and we want to discuss more; (2) a debate topic is often related to other topics, and the discussion is incomplete if we do not discuss those as well; (3) we may want to discuss a particular concept at the core of the debate topic. It is thus meaningful to build a model that finds expansions of a given topic.
In 2019, a team at IBM Research proposed a method to expand the boundaries of given debate topics and find expansion topics. Their paper distinguishes two types of topic expansions, consistent and contrastive. We focus on consistent expansions, defined as expansions that extend the topic in a positive, or at least neutral, way.
The main objective of this paper is to follow and examine the steps of the IBM Research team's approach. Since the original work builds its model for English, we implement a topic expansion model with seven steps, including pattern extraction, filtering and training, in another language (German) with the help of machine translation, and compare the results of different models in order to propose a final German model.
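The pattern extraction step can be pictured as matching lexico-syntactic templates over sentences, in the spirit of Hearst patterns; the templates below are illustrative stand-ins, not the patterns actually used by the IBM work or this thesis:

```python
import re

# Hearst-style lexico-syntactic patterns for (topic, expansion-candidate) pairs.
# Illustrative only; the actual patterns in the referenced work differ.
PATTERNS = [
    re.compile(r"(?P<general>\w[\w ]*?) such as (?P<specific>\w[\w ]*)"),
    re.compile(r"(?P<specific>\w[\w ]*?) and other (?P<general>\w[\w ]*)"),
]

def extract_expansions(sentence):
    """Return (general topic, candidate expansion) pairs found in a sentence."""
    pairs = []
    for pattern in PATTERNS:
        for m in pattern.finditer(sentence):
            pairs.append((m.group("general").strip(), m.group("specific").strip()))
    return pairs
```

Pairs harvested this way over a large corpus would then pass through the filtering and training steps before candidate expansions are ranked.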
Reproducible annotations
(2022)
This bachelor thesis presents a software solution which implements reproducible annotations in the context of the UIMA framework. This is achieved by creating an automated containerization of arbitrary analysis engines and by annotating every analysis engine configuration in the processed CAS document. Any CAS document created by this solution is self-sufficient and able to reproduce the exact environment under which it was created.
A review of the state-of-the-art software in the field of UIMA reveals that there are many implementations trying to increase reproducibility for a given application relying on UIMA, but no publication trying to increase the reproducibility of UIMA itself. This thesis closes that technological gap and concludes with a thorough analysis, which shows a negligible overhead in memory consumption but a significant performance regression depending on the complexity of the analysis engine examined.
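The core idea of annotating every engine configuration in the processed document can be sketched as follows (a simplified illustration using a plain dictionary as a stand-in for CAS metadata; the names and structure are hypothetical, not the thesis's or UIMA's actual API):

```python
import hashlib
import json

def annotate_engine_config(document, engine_config):
    """Embed a canonical, hashed record of an analysis engine's configuration
    (e.g. container image and parameters) in the document metadata, so the
    processing environment can later be reproduced and verified."""
    canonical = json.dumps(engine_config, sort_keys=True, separators=(",", ":"))
    record = {
        "config": engine_config,
        "sha256": hashlib.sha256(canonical.encode("utf-8")).hexdigest(),
    }
    document.setdefault("metadata", {}).setdefault("engines", []).append(record)
    return document

doc = annotate_engine_config(
    {}, {"image": "example/tokenizer:1.0", "params": {"lang": "de"}}
)
```

Because the configuration is serialized canonically before hashing, two documents processed by identical engine setups carry identical hashes, which is what makes the environment reproducible and comparable.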