In online video games, toxic interactions are highly prevalent and are often even considered an integral part of gaming. Most studies analyse toxicity in video games by examining the messages sent during a match, while only a few focus on other interactions. We focus specifically on in-game events to identify toxic matches, constructing a framework that takes a list of time-based events and projects them into a graph structure, which we can then analyse with current methods from the field of graph representation learning. Specifically, we use a Graph Neural Network with Principal Neighbourhood Aggregation to analyse the graph structure and predict the toxicity of a match. We also discuss the subjectivity behind the term toxicity and why analysing only in-game messages with current state-of-the-art NLP methods is not sufficient to infer whether a match is perceived as toxic.
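The projection of time-based events into a graph structure can be sketched as follows. This is an illustrative sketch only, assuming a hypothetical event format `(timestamp, actor, target, event_type)`; the thesis' actual construction may differ.

```python
import networkx as nx

# Hypothetical event format: (timestamp, actor, target, event_type).
# Illustrates the general idea of projecting a list of time-based
# match events into a weighted directed graph.
events = [
    (12.4, "player1", "player2", "kill"),
    (30.1, "player2", "player1", "kill"),
    (45.7, "player1", "player3", "assist"),
]

def events_to_graph(events):
    """Project a list of time-based events into a weighted directed graph."""
    g = nx.DiGraph()
    for t, actor, target, kind in events:
        if g.has_edge(actor, target):
            g[actor][target]["weight"] += 1
            g[actor][target]["types"].append(kind)
        else:
            g.add_edge(actor, target, weight=1, types=[kind])
    return g

g = events_to_graph(events)
print(g.number_of_nodes(), g.number_of_edges())  # 3 3
```

A graph built this way can then be fed to graph representation learning methods such as the GNN mentioned above.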
Blockchains in public administration : a RADIUS on blockchain framework for public administration
(2023)
The emergence of blockchain technology has generated a great deal of attention, as reflected in numerous scientific and journalistic articles. However, the implementation of blockchain for public administrations in Germany has encountered a setback owing to unsuccessful initiatives. Initial enthusiasm was followed by disillusionment. Nevertheless, the technology continues to evolve. This paper examines whether the use of a blockchain can still optimize the processes of public administrations. Not only are the failed projects analysed, but also more recent applications of the technology and their potential relevance for administration, especially in the state of Hesse.
To answer whether blockchains are promising for administrations, a Design Science Research (DSR) approach is chosen. DSR is a research-based approach that aims to create new and innovative solutions to real-world problems through the development and evaluation of artefacts such as models, methods, or prototypes. For this work, the implementation of a framework to realize an Authentication, Authorization, and Accounting (AAA) system on the blockchain was identified as profitable. The framework aims to implement the aforementioned AAA tasks using a blockchain. The Remote Authentication Dial-In User Service (RADIUS) protocol has been identified as a suitable protocol for the AAA system. The goal is to create a way to implement the system either entirely on a blockchain or as a hybrid system. Various blockchain technologies are considered. The framework developed for this purpose is named AAA-me.
The development of AAA-me has shown that the desired framework for implementing RADIUS on the blockchain is feasible at various degrees of implementation. Previous work mostly relied on full development. Additionally, it has been shown that AAA-me can be used to perform hybrid integration at different implementation levels, which sets it apart from the few previous hybrid approaches. Furthermore, AAA-me was investigated in different laboratory environments to determine its expected resilience against a Single Point of Failure (SPOF). The results of the lab investigation indicated that a RADIUS system on top of a blockchain can provide benefits in terms of security and performance. In the lab environment, the times within which a series of authorization requests were processed were measured. In addition, it was illustrated how a RADIUS system implemented using blockchain can protect itself against Man-in-the-Middle (MITM) attacks.
Finally, in collaboration with the Hessian Central Office for Data Processing (German: Hessische Zentrale für Datenverarbeitung) (HZD), another test lab demonstrated how a RADIUS system on the blockchain can integrate with the existing IT systems of the German state of Hesse. Based on these findings, this work reevaluated the applicability of blockchain technology for public administration processes.
The work has thus shown that the use of a blockchain can still be purposeful. However, it has also been shown that an implementation can bring many problems with it. The small number of blockchain developers and engineers poses the risk of not finding people to develop and maintain such a system. In addition, one faces the problem of determining an architecture now that will be applied to many projects in the future, while each project can, in turn, influence the choice of architecture. Once this problem is solved and a blockchain infrastructure is available, systems can be established on it quickly and be more SPOF-resistant, for example Public Key Infrastructure (PKI) systems.
AAA-me was only applied in lab and test environments. As a result, no real data ran over its infrastructure, which allowed the necessary flexibility during development. However, system-related properties could appear in real deployments that are not detectable in this way. Furthermore, AAA-me's development is still in its infancy: many manual adjustments need to be made for it to integrate with an existing RADIUS system. Also, no dedicated system-hardening effort was carried out in the lab environments, and vulnerabilities can quickly open up on web servers due to misconfigurations and missing updates. For these reasons, productive use should be discouraged unless major further development is carried out.
We show explicit formulas for the evaluation of (possibly higher-order) fractional Laplacians (-△)ˢ of some functions supported on ellipsoids. In particular, we derive the explicit expression of the torsion function and give examples of s-harmonic functions. As an application, we infer that the weak maximum principle fails in eccentric ellipsoids for s ∈ (1; √3 + 3/2) in any dimension n ≥ 2. We build a counterexample in terms of the torsion function times a polynomial of degree 2. Using point inversion transformations, it follows that a variety of bounded and unbounded domains do not satisfy positivity preserving properties either and we give some examples.
A convex body is unconditional if it is symmetric with respect to reflections in all coordinate hyperplanes. We investigate unconditional lattice polytopes with respect to geometric, combinatorial, and algebraic properties. In particular, we characterize unconditional reflexive polytopes in terms of perfect graphs. As a prime example, we study the signed Birkhoff polytope. Moreover, we derive constructions for Gale-dual pairs of polytopes and we explicitly describe Gröbner bases for unconditional reflexive polytopes coming from partially ordered sets.
Adaptive, synchronous, and mobile online education: developing the ASYMPTOTE learning environment
(2022)
The COVID-19-induced distance education was perceived as highly challenging by teachers and students. A cross-national comparison of five European countries identified several challenges that occurred during the distance learning period. On this basis, the article aims to develop a theoretical framework and design requirements for distance and online learning tools. As an example of online learning in mathematics education, the ASYMPTOTE system is introduced; it will be freely available by May 2022. ASYMPTOTE is aimed at the adaptive and synchronous delivery of online education by taking a mobile learning approach. Its core is the so-called digital classroom, which not only allows students to interact with each other or with the teacher but also enables teachers to monitor their students’ work progress in real time. With respect to the theoretical framework, this article analyses to what extent the ASYMPTOTE system meets the requirements of online learning. Overall, the digital classroom can be seen as a promising tool for teachers to carry out appropriate formative assessment and, in part, to maintain personal and content-related interaction at a distance. Moreover, we highlight the availability of this tool: due to its mobile learning approach, almost all students will be able to participate in lessons conducted with ASYMPTOTE.
This dissertation is concerned with the task of map-based self-localization, using images of the ground recorded with a downward-facing camera. In this context, map-based (self-)localization is the task of determining the position and orientation of a query image that is to be localized. The map used for this purpose consists of a set of reference images with known positions and orientations in a common coordinate system. For localization, the considered methods determine correspondences between features of the query image and those of the reference images.
In comparison with localization approaches that use images of the surrounding environment, we expect that using images of the ground has the advantage that, unlike the surroundings, the visual appearance of the ground is often stable in the long term. Also, by using active lighting of the ground, localization becomes independent of external lighting conditions.
This dissertation includes content of several published contributions, which present research on the development and testing of methods for feature-based localization of ground images. Our first contribution examines methods for the extraction of image features that have not been designed to be used on ground images. This survey shows that, with appropriate parametrization, several of these methods are well suited for the task.
Based on this insight, we develop and examine methods for various subtasks of map-based localization in the following contributions. We examine global localization, where all reference images have to be considered, as well as local localization, where an approximation of the query image position is already known, which allows for disregarding reference images with a large distance to this position.
In our second contribution, we present the first systematic comparison of state-of-the-art methods for ground texture based localization. Furthermore, we present a method characterized by its use of our novel feature matching technique. This technique is called identity matching, as it matches only features with identical descriptors, in contrast to the state of the art, which also matches features with similar descriptors. We show that our method is well suited for global and local localization, as it scales favorably with the number of reference images considered during the localization process. In another contribution, we develop a variant of our localization method that is significantly faster to compute, as it applies a sampling approach to determine the image positions at which local features are extracted, instead of using classical feature detectors.
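The core of identity matching can be sketched with a hash-map lookup: instead of a nearest-neighbour search over similar descriptors, only features whose descriptors are exactly equal are matched. This is an illustrative sketch assuming hashable (e.g. binary) descriptors, not the dissertation's implementation.

```python
from collections import defaultdict

# Illustrative sketch of the identity-matching idea: only features with
# byte-for-byte identical descriptors are paired, so matching reduces to
# a dictionary lookup instead of a nearest-neighbour search.
def identity_match(query_features, reference_features):
    """Return pairs (query_index, reference_index) with identical descriptors."""
    index = defaultdict(list)
    for j, desc in enumerate(reference_features):
        index[desc].append(j)
    matches = []
    for i, desc in enumerate(query_features):
        for j in index.get(desc, ()):
            matches.append((i, j))
    return matches

q = [b"\x01\x02", b"\xff\x00", b"\x10\x20"]
r = [b"\xff\x00", b"\x01\x02", b"\x01\x02"]
print(identity_match(q, r))  # [(0, 1), (0, 2), (1, 0)]
```

Because the lookup cost per query feature is independent of the number of reference images, this illustrates the favorable scaling mentioned above.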
Two further contributions are concerned with global localization. The first one introduces a prediction model for the global localization performance, based on an evaluation of the local localization performance. This allows us to quickly evaluate any considered parameter settings of global localization methods. The second contribution introduces a learning-based method that computes compact descriptors of ground images. This descriptor can be used to retrieve the overlapping reference images of a query image from a large set of reference images with little computational effort.
The most recent contribution included in this dissertation presents a new ground image database, recorded with a dedicated platform using a downward-facing camera. In addition to the data, we also explain our guidelines for the construction of the platform. In comparison with existing databases, ours contains more images and presents a larger variety of ground textures. Furthermore, this database enables us to perform the first systematic evaluation of how localization performance is affected by the time interval between the recording of the reference images and the recording of the query image. We find that, for outdoor areas, all ground texture based localization methods have reliability issues if the time interval between the recording of the query and reference images is large, and also if the weather conditions differ. These findings point to remaining challenges in ground texture based localization that should be addressed in future work.
Matroids are combinatorial objects that generalize linear independence. A matroid can be represented geometrically by its Bergman fan and we compare the symmetries of these two objects. Sometimes, the Bergman fan has additional automorphisms, which are related to Cremona transformations in projective space. Their existence depends on a combinatorial property of the matroid, as has been shown by Shaw and Werner, and we study the consequences for the structure of such matroids. This allows us to gain a better understanding of the so-called Cremona group of a matroid and we apply our results to root system matroids.
FEM–BEM coupling for the thermoelastic wave equation with transparent boundary conditions in 3D
(2022)
We consider the thermoelastic wave equation in three dimensions with transparent boundary conditions on a bounded, not necessarily convex domain. In order to solve this problem numerically, we introduce a coupling of the thermoelastic wave equation in the interior domain with time-dependent boundary integral equations. Here, we want to highlight that this type of problem differs from other wave-type problems for which FEM–BEM coupling has been considered so far, e.g., the acoustic as well as the elastic wave equation, since our problem consists of coupled partial differential equations involving a vector-valued displacement field and a scalar-valued temperature field. This constitutes a nontrivial challenge which is solved in this paper. Our main focus is on a coercivity property of a Calderón operator for the thermoelastic wave equation in the Laplace domain, which is valid for all complex frequencies in a half-plane. Combining Laplace transform and energy techniques, this coercivity in the frequency domain is used to prove the stability of a fully discrete numerical method in the time domain. The considered numerical method couples finite elements and leapfrog time-stepping in the interior with boundary elements and convolution quadrature on the boundary. Finally, we present error estimates for the semi- and full discretization.
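The leapfrog time-stepping used in the interior can be illustrated on a much simpler model problem. The following sketch integrates the 1D scalar wave equation with homogeneous Dirichlet boundaries; it only shows the explicit second-order time integrator, not the thermoelastic coupling or the boundary-element part, and all parameter values are illustrative.

```python
import numpy as np

# Leapfrog (central-difference) time-stepping for u_tt = c^2 u_xx on [0, 1]
# with u = 0 at both ends. Stable under the CFL condition c*dt/dx < 1.
c, domain_length, T = 1.0, 1.0, 0.5
n_x, n_t = 101, 400
dx = domain_length / (n_x - 1)
dt = T / n_t
assert c * dt / dx < 1.0  # CFL condition

x = np.linspace(0.0, domain_length, n_x)
u_prev = np.sin(np.pi * x)   # initial displacement
u = u_prev.copy()            # zero initial velocity (first step simplified)
r2 = (c * dt / dx) ** 2
for _ in range(n_t):
    u_next = np.zeros_like(u)
    u_next[1:-1] = 2 * u[1:-1] - u_prev[1:-1] + r2 * (u[2:] - 2 * u[1:-1] + u[:-2])
    u_prev, u = u, u_next

print(float(np.max(np.abs(u))))  # amplitude stays bounded under the CFL condition
```

In the paper's setting, this interior scheme is coupled to convolution quadrature on the boundary; the sketch above conveys only the explicit time-stepping idea.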
The development of epilepsy (epileptogenesis) involves a complex interplay of neuronal and immune processes. Here, we present a first-of-its-kind mathematical model to better understand the relationships among these processes. Our model describes the interaction between neuroinflammation, blood-brain barrier disruption, neuronal loss, circuit remodeling, and seizures. Formulated as a system of nonlinear differential equations, the model reproduces the available data from three animal models. The model successfully describes characteristic features of epileptogenesis such as its paradoxically long timescales (up to decades) despite short and transient injuries or the existence of qualitatively different outcomes for varying injury intensity. In line with the concept of degeneracy, our simulations reveal multiple routes toward epilepsy with neuronal loss as a sufficient but non-necessary component. Finally, we show that our model allows for in silico predictions of therapeutic strategies, revealing injury-specific therapeutic targets and optimal time windows for intervention.
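A system of coupled nonlinear ODEs of this kind can be integrated numerically as sketched below. The rate terms, parameters, and state names here are invented for illustration and are NOT the published model; the sketch only shows the general workflow of simulating such a system.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Purely illustrative toy system with states loosely named after those in
# the abstract: inflammation I, barrier disruption B, neuronal loss N,
# circuit remodeling R. All equations and parameters are invented.
def rhs(t, y, a=1.0, b=0.5, g=0.2, d=0.1):
    I, B, N, R = y
    dI = a * B - d * I            # inflammation driven by barrier disruption
    dB = b * I * (1 - B) - d * B  # disruption saturates and decays
    dN = g * I * (1 - N)          # inflammation drives irreversible loss
    dR = g * N * (1 - R)          # loss drives circuit remodeling
    return [dI, dB, dN, dR]

sol = solve_ivp(rhs, (0.0, 50.0), [0.1, 0.2, 0.0, 0.0])
print(sol.success, sol.y.shape[0])
```

Once such a system reproduces data, in silico intervention experiments amount to changing parameters or adding control terms and re-integrating.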
AttendAffectNet-emotion prediction of movie viewers using multimodal fusion with self-attention
(2021)
In this paper, we tackle the problem of predicting the affective responses of movie viewers based on the content of the movies. Current studies on this topic focus on video representation learning and fusion techniques to combine the extracted features for predicting affect. Yet, these typically ignore the correlations both between multiple modality inputs and between temporal inputs (i.e., sequential features). To explore these correlations, we propose a neural network architecture, AttendAffectNet (AAN), that uses the self-attention mechanism for predicting the emotions of movie viewers from different input modalities. In particular, visual, audio, and text features are considered for predicting emotions, expressed in terms of valence and arousal. We analyze three variants of our proposed AAN: Feature AAN, Temporal AAN, and Mixed AAN. The Feature AAN applies the self-attention mechanism in an innovative way to the features extracted from the different modalities (including video, audio, and movie subtitles) of a whole movie, thereby capturing the relationships between them. The Temporal AAN takes the time domain of the movies and the sequential dependency of affective responses into account: self-attention is applied to the concatenated (multimodal) feature vectors representing subsequent movie segments. In the Mixed AAN, we combine the strong points of the Feature AAN and the Temporal AAN by applying self-attention first to the vectors of features obtained from different modalities in each movie segment and then to the feature representations of all subsequent (temporal) movie segments. We extensively trained and validated our proposed AAN on both the MediaEval 2016 dataset for the Emotional Impact of Movies Task and the extended COGNIMUSE dataset.
Our experiments demonstrate that audio features play a more influential role than those extracted from video and movie subtitles when predicting the emotions of movie viewers on these datasets. The models that use all visual, audio, and text features simultaneously as their inputs performed better than those using features extracted from each modality separately. In addition, the Feature AAN outperformed other AAN variants on the above-mentioned datasets, highlighting the importance of taking different features as context to one another when fusing them. The Feature AAN also performed better than the baseline models when predicting the valence dimension.
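The self-attention step over modality features can be sketched in a few lines of numpy. This is a generic scaled dot-product self-attention over one feature vector per modality, in the spirit of the Feature AAN; the dimensions and random projections are illustrative, not the published architecture.

```python
import numpy as np

# Scaled dot-product self-attention over modality feature vectors:
# each of the three modality vectors (video, audio, text) attends to
# all others before fusion. Projections are random stand-ins for
# learned weights.
rng = np.random.default_rng(42)
dim = 16
W_q, W_k, W_v = (rng.normal(scale=dim ** -0.5, size=(dim, dim)) for _ in range(3))

def self_attention(feats):
    """feats: (n_modalities, dim) -> attended features (n_modalities, dim)."""
    Q, K, V = feats @ W_q, feats @ W_k, feats @ W_v
    scores = Q @ K.T / np.sqrt(dim)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V

modalities = rng.normal(size=(3, dim))  # stand-ins for video, audio, text features
fused = self_attention(modalities)
print(fused.shape)  # (3, 16)
```

In a full model, the attended vectors would then be pooled and passed to a regression head predicting valence and arousal.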
Linking mathematics with reality is not new. Nor is it new to use outdoor activities to learn mathematics. What does seem to be new is combining such mathematical outdoor activities with mobile technology, as the geocaching community does by using GPS technology to guide its members to special places and points of interest. The use of mobile technologies to learn at any time and any location is known as “mobile learning”. This type of learning can be seen as an extension of e-learning. Considering the definition of O’Malley, one notices that it does not exactly match the idea of the MathCityMap project (MCM), because the learning environment in the MCM project is predetermined. Combined with the math trail method, the project enables mobile learning along math trails with the latest technology. In the MCM project, students experience mathematics at real places and in real situations in out-of-school activities, with the help of GPS-enabled smartphones and special math problems. In contrast to paper versions of math trails, we are able to give direct feedback on solutions by using mobile devices such as smartphones or tablets. If a user has difficulties solving a modeling task, stepped hints can be provided. Teachers can use the MCM portal to upload tasks developed by themselves or by their students, and to build personal math trails for their students.
The single-source shortest-path problem is a fundamental problem in computer science. We consider a generalization of the shortest-path problem, the $k$-shortest path problem. Let $G$ be a directed edge-weighted graph with $n$ nodes and $m$ edges, and let $s,t$ be two fixed nodes. The goal is to compute $k$ paths $P_1,\dots,P_k$ between $s$ and $t$ in non-decreasing order of their length such that all other paths between $s$ and $t$ are at least as long as the $k$-th path $P_k$. We focus on the version of the problem where paths are not allowed to visit nodes multiple times, sometimes referred to as the $k$-shortest simple path problem.
Probably the best-known $k$-shortest path algorithm is Yen's algorithm. It has a worst-case time complexity of $O(kn \cdot scp(n,m))$, where $scp(n,m)$ is the complexity of the single-source shortest-path algorithm used as a subroutine; in the case of Dijkstra's algorithm, $scp(n,m)$ is $O(m + n \log n)$. One of the more recent improvements of Yen's algorithm is due to Feng. Even though Feng's algorithm is much faster in practice, it has the same worst-case complexity as Yen's algorithm.
The main results presented in this thesis are upper bounds on the average-case of Yen's and Feng's algorithm, as well as practical improvements and a parallel implementation of Yen's and Feng's algorithms including these improvements. The implementation is publicly available under GPLv3 open source license.
We show in our analysis that Yen's algorithm has an average-case complexity of $O(k \log(n) \cdot scp(n,m))$ on $G(n,p)$ graphs with at least logarithmic average degree and random edge weights following a distribution with certain properties.
On $G(n,p)$ graphs with constant to logarithmic average degree and uniform random edge weights over $[0,1]$, we show an average-case complexity of $O(k \cdot \frac{\log^2 n}{np} \cdot scp(n,m))$. Feng's algorithm has an even better average-case complexity of $O(k \cdot scp(n,m))$ on unweighted $G(n,p)$ graphs with logarithmic average degree and for constant values of $k$. We further provide evidence that the same holds true for $G(n,p)$ graphs with uniform random edge weights over $[0,1]$.
On the practical side, we suggest new heuristics to prune even more single-source shortest-path computations than Feng's algorithm and evaluate all presented algorithms on G(n,p) and Grid graphs with up to 256 million nodes. We demonstrate speedups by a factor of up to 40 compared to Feng's algorithm.
Finally, we discuss two ways to parallelize the suggested algorithms and evaluate them on grid graphs, showing speedups by a factor of 2 using 4 threads and by a factor of up to 8 using 16 threads, respectively.
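For a concrete feel of the problem, the $k$ shortest simple paths of a small weighted digraph can be enumerated with networkx, whose `shortest_simple_paths` generator is based on Yen's algorithm. The toy graph below is an invented example.

```python
from itertools import islice
import networkx as nx

# Toy weighted digraph; nx.shortest_simple_paths yields simple s-t paths
# in non-decreasing order of total weight (Yen's algorithm).
G = nx.DiGraph()
G.add_weighted_edges_from([
    ("s", "a", 1), ("a", "t", 1),
    ("s", "b", 1), ("b", "t", 2),
    ("a", "b", 1),
])
k = 3
paths = list(islice(nx.shortest_simple_paths(G, "s", "t", weight="weight"), k))
print(paths)  # [['s', 'a', 't'], ['s', 'b', 't'], ['s', 'a', 'b', 't']]
```

Each of the $k$ iterations spawns up to $n$ single-source shortest-path computations on modified graphs, which is where the $O(kn \cdot scp(n,m))$ worst case comes from.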
Cyber-Physical Systems (CPS) are growing more and more complex due to the availability of cheap hardware, sensors, actuators, and communication links. A network of cooperating CPSs (CPN) further increases this complexity. This poses challenges but also offers opportunities: the increasing complexity makes it harder to design, operate, optimize, and maintain such CPNs; on the other hand, appropriate use of the increasing resources in computational nodes, sensors, and actuators can significantly improve system performance, reliability, and flexibility. Therefore, self-X features such as self-organization, self-adaptation, and self-healing are key principles for such systems.
Additionally, CPNs are often deployed in dynamic, unpredictable environments and safety-critical domains, such as transportation, energy, and healthcare. In such domains, applications of different criticality levels usually coexist. In an automotive environment, for example, the brake has a higher criticality level regarding safety than the infotainment system. As a result of mixed criticality, applications requiring hard real-time guarantees compete with those requiring soft real-time guarantees and with best-effort applications for the given resources within the overall system. This leads to the need to accommodate multiple levels of criticality while ensuring safety and reliability, which increases the already high complexity even more.
This thesis deals with the question of how to conveniently, effectively, and efficiently handle the management and complexity of mixed-critical CPNs (MC-CPNs). Since this can no longer be done by the system developer without assistance from the system itself, it is essential to develop new approaches and techniques to ensure that such systems can operate under a range of conditions while meeting stringent requirements.
Based on five research hypotheses, this thesis introduces a comprehensive adaptive mixed-criticality-supporting middleware for Cyber-Physical Networks (Chameleon), which efficiently and autonomously manages the complexity of CPNs with regard to the mixed-criticality aspect.
Chameleon contributes to the state of the art by introducing and combining the following concepts:
- A comprehensive self-adaptation mechanism on all levels of the system model is provided.
- This mechanism allows a flexible combination of parametric and structural adaptation actions (relocation, scheduling, tuning, ...) to modify the behavior of the system.
- Real-time constraints of mixed-critical applications (hard real-time, soft real-time, best-effort) are considered in all possible adaptation conditions and actions by the use of the importance parameter.
- CPNs are supported by the introduction of different scopes (local, system, global) for the adaptation conditions and actions. This also enables the combination of different scopes for conditions and actions.
- The realization of the adaptation with a MAPE-K loop instantiated by a distributed Learning Classifier System (LCS) allows for real-time-capable reasoning about adaptation actions, which also works on resource-constrained systems.
- The developed rule language Rango offers an intuitive way to specify an initial rule set for LCS in the context of CPS/CPNs and supports the system administrators in the process of rule set generation.
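The MAPE-K pattern referenced above can be sketched as a textbook skeleton. This is a generic Monitor-Analyze-Plan-Execute loop over shared Knowledge, not Chameleon's implementation; the rule format and action name are invented for illustration.

```python
# Generic MAPE-K skeleton: Monitor, Analyze, Plan, Execute phases sharing
# a Knowledge base. Rules map a condition over the knowledge to an action.
class MapeK:
    def __init__(self, rules):
        self.knowledge = {}      # shared knowledge base (K)
        self.rules = rules       # list of (condition, action) pairs

    def monitor(self, sensors):
        self.knowledge.update(sensors)

    def analyze(self):
        return [act for cond, act in self.rules if cond(self.knowledge)]

    def plan(self, actions):
        return actions[:1]       # trivial planner: first applicable action

    def execute(self, plan):
        return [action(self.knowledge) for action in plan]

rules = [
    (lambda k: k.get("cpu_load", 0) > 0.9,
     lambda k: "relocate_soft_realtime_tasks"),  # invented adaptation action
]
loop = MapeK(rules)
loop.monitor({"cpu_load": 0.95})
print(loop.execute(loop.plan(loop.analyze())))  # ['relocate_soft_realtime_tasks']
```

In Chameleon, the analyze/plan steps are realized by a distributed LCS seeded with Rango rules, and actions span the local, system, and global scopes described above.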
Proteins are biological macromolecules playing essential roles in all living organisms.
Proteins often bind with each other forming complexes to fulfill their function. Such protein complexes assemble along an ordered pathway. An assembled protein complex can often be divided into structural and functional modules. Knowing the order of assembly and the modules of a protein complex is important to understand biological processes and treat diseases related to misassembly.
Typical structures in the Protein Data Bank (PDB) contain two to three subunits and a few thousand atoms. Recent developments have led to large protein complexes being resolved. The increasing number and size of protein complexes demand computational assistance for visualization and analysis. One such large protein complex is respiratory complex I, comprising 45 subunits in Homo sapiens. Complex I is a well-understood protein complex that served as a case study to validate our methods.
Our aim was to analyze time-resolved Molecular Dynamics (MD) simulation data, identify modules of a protein complex and generate hypotheses for the assembly pathway of a protein complex. For that purpose, we abstracted the topology of protein complexes to Complex Graphs of the Protein Topology Graph Library (PTGL). The subunits are represented as vertices, and spatial contacts as edges. The edges are weighted with the number of contacts based on a distance threshold. This allowed us to apply graph-theoretic methods to visualize and analyze protein complexes.
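The Complex Graph construction can be sketched as follows: subunits become vertices, and an edge weighted by the number of atom pairs closer than a distance threshold connects two subunits. The coordinates and the 5 Å threshold below are toy assumptions, not the PTGL's actual parameters.

```python
import networkx as nx
import numpy as np

# Toy atom coordinates for three "subunits"; A and B overlap spatially,
# C is placed far away so it has no contacts.
rng = np.random.default_rng(0)
subunits = {
    "A": rng.uniform(0, 10, size=(50, 3)),
    "B": rng.uniform(5, 15, size=(50, 3)),
    "C": rng.uniform(40, 50, size=(50, 3)),
}

def contact_graph(subunits, threshold=5.0):
    """Build a graph with subunits as vertices and contact counts as edge weights."""
    g = nx.Graph()
    g.add_nodes_from(subunits)
    names = list(subunits)
    for i, u in enumerate(names):
        for v in names[i + 1:]:
            d = np.linalg.norm(subunits[u][:, None] - subunits[v][None], axis=-1)
            n_contacts = int((d < threshold).sum())
            if n_contacts:
                g.add_edge(u, v, weight=n_contacts)
    return g

g = contact_graph(subunits)
print(sorted(g.edges))  # [('A', 'B')] -- C is isolated
```

Graph-theoretic methods (clustering, centrality, and so on) can then be applied directly to such a graph.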
We extended the implementations of two methods to achieve computation of Complex Graphs in feasible runtimes. The first method skips checks for contacts using the information of which residues are sequential neighbors; we extended it to protein complexes and structures containing ligands. The second method introduces spheres encompassing all atoms of a subunit and skips the check for contacts if the corresponding spheres do not overlap. Combined, both methods allowed skipping up to 93 % of the contact checks for sample complexes of 40 subunits, compared to up to 10 % for the previous implementation. We showed that the runtime of the combined method scales linearly with the number of atoms, compared to the non-linear scaling of the previous implementation. We implemented a third method fixing the assignment of an orientation to secondary structure elements: we place a three-dimensional vector in each secondary structure element and compute the angle between secondary structure elements to assign an orientation. This method sped up the runtime especially for large structures, such as the capsid of the human immunodeficiency virus, for which the runtime decreased from 43 to less than 9 hours.
The feasible runtimes allowed us to investigate two data sets of MD trajectories of respiratory complex I of Thermus thermophilus that we received. The data sets differ only in whether ubiquinone is bound to the complex. We implemented a pipeline, PTGLdynamics, to compute the contacts and Complex Graphs for all time steps of the trajectories. We investigated different methods to track changes of contacts during the simulation and created a heat map on the three-dimensional structure visualizing these changes. We also created line plots to visualize the changes of contacts over the course of the simulation. Both visualizations helped to spot outstandingly flexible or rigid regions of the structure, or time points of the simulation at which major dynamics occur.
We introduced normalizations of the edge weights of Complex Graphs for identifying modules and predicting the assembly pathway. The idea is to normalize the number of contacts by the number of residues of a subunit. We defined five different normalizations.
To identify structural and functional modules, we applied the Leiden graph clustering algorithm to the Complex Graphs of respiratory complex I and the respiratory supercomplex. We examined the results for the different normalizations of the weights of the Complex Graphs. The absolute edge weight produced the best result identifying three of four modules that have been defined in the literature for respiratory complex I.
We applied agglomerative hierarchical clustering to the edges of a Complex Graph to create hypotheses of the assembly pathway. The rationale was that subunits with an extensive interface in the final structure assemble early. We tested our method against two existing methods on a data set of 21 proteins with reported assembly pathways. Our prediction outperformed the other methods and ran in feasible runtimes of a few minutes at most.
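The hierarchical-clustering idea, that subunit pairs with large interfaces assemble early, can be sketched with scipy. The contact counts below are toy numbers, and converting contacts to a distance as `1 / contacts` is one simple choice, not necessarily the normalization used in the thesis.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage

# Toy pairwise contact counts between four subunits A-D; large interface
# (many contacts) = small distance = early merge in the dendrogram.
subunits = ["A", "B", "C", "D"]
contacts = np.array([
    [ 0, 90, 10,  1],
    [90,  0, 40,  5],
    [10, 40,  0, 60],
    [ 1,  5, 60,  0],
], dtype=float)

# condensed upper-triangle distance vector in scipy's expected order
iu = np.triu_indices(len(subunits), k=1)
dist = 1.0 / contacts[iu]
Z = linkage(dist, method="average")
# each row of Z merges two clusters; earlier rows = earlier assembly steps
print(Z[:, 2])  # merge heights, non-decreasing
```

Here the A-B pair (largest interface) merges first, suggesting it assembles earliest; the resulting dendrogram is the form in which predictions were compared to reference pathways via Robinson-Foulds and clustering information distances.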
We also tested our method on respiratory complex I, the respiratory supercomplex and the respiratory megacomplex. We compared the results for the different normalizations with an assembly pathway of respiratory complex I described in the literature. We transformed the assembly pathways to dendrograms and compared the predictions to the reference using the Robinson-Foulds distance and clustering information distance. We analyzed the landscape of the clustering information distance by generating random dendrograms and showed that our result is far better than expected at random. We showed in a detailed analysis that the assembly prediction using one normalization was able to capture key features of the assembly pathway that has been proposed in the literature.
In conclusion, we presented different applications of graph theory to automatically analyze the topology of protein complexes. Our programs run in feasible runtimes even for large complexes. We showed that graph-theoretic modeling of the protein structure can be used to analyze MD simulation data, identify modules of protein complexes and predict assembly pathways.
The present paper is concerned with the half-space Dirichlet problem [...] where ℝ^N_+ := {x ∈ ℝ^N : x_N > 0} for some N ≥ 1, and p > 1, c > 0 are constants. We analyse the existence, non-existence, and multiplicity of bounded positive solutions to (P_c). We prove that the existence and multiplicity of bounded positive solutions to (P_c) depend in a striking way on the value of c > 0 and also on the dimension N. We find an explicit number c_p ∈ (1, √e), depending only on p, which determines the threshold between existence and non-existence. In particular, in dimensions N ≥ 2, we prove that, for 0 < c < c_p, problem (P_c) admits infinitely many bounded positive solutions, whereas, for c > c_p, there are no bounded positive solutions to (P_c).
Goal-Conditioned Reinforcement Learning (GCRL) is a popular framework for training agents to solve multiple tasks in a single environment. It is crucial to train an agent on a diverse set of goals to ensure that it can learn to generalize to unseen downstream goals. Therefore, current algorithms try to learn to reach goals while simultaneously exploring the environment for new ones (Aubret et al., 2021; Mendonca et al., 2021). This creates a form of the prominent exploration-exploitation dilemma. To relieve the pressure of a single agent having to optimize for two competing objectives at once, this thesis proposes the novel algorithm family Goal-Conditioned Reinforcement Learning with Prior Intrinsic Exploration (GC-π), which separates exploration and goal learning into distinct phases. In the first exploration phase, an intrinsically motivated agent explores the environment and collects a rich dataset of states and actions. This dataset is then used to learn a representation space, which acts as the distance metric for the goal-conditioned reward signal. In the final phase, a goal-conditioned policy is trained with the help of the representation space, and its training goals are randomly sampled from the dataset collected during the exploration phase. Multiple variations of these three phases have been extensively evaluated in the classic AntMaze MuJoCo environment (Nachum et al., 2018). The final results show that the proposed algorithms are able to fully explore the environment and solve all downstream goals while using every dimension of the state space for the goal space. This makes the approach more flexible compared to previous GCRL work, which only ever uses a small subset of the dimensions for the goals (S. Li et al., 2021a; Pong et al., 2020).
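The reward construction in the final phase can be sketched in a few lines. Everything here is a stand-in: `phi` is a fixed random linear map in place of the learned encoder, and the exploration dataset is random noise in place of states collected by the intrinsically motivated agent. The point is only the structure: goals are sampled from the exploration dataset, and the reward is the negative distance between state and goal in the representation space:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the learned encoder phi: a fixed random linear map from a
# hypothetical 4-dimensional state space to a 2-dimensional representation.
W = rng.normal(size=(4, 2))

def phi(state):
    return state @ W

def goal_reward(state, goal):
    """Goal-conditioned reward: negative distance between state and goal
    in the learned representation space."""
    return -np.linalg.norm(phi(state) - phi(goal))

# Phase 1 stand-in: a dataset of states collected during intrinsic exploration.
dataset = rng.normal(size=(100, 4))

# Phase 3: training goals are sampled uniformly from the exploration dataset,
# so any visited state (in all its dimensions) can serve as a goal.
goal = dataset[rng.integers(len(dataset))]
state = dataset[0]
print(goal_reward(state, goal))
```

The reward is maximal (zero) exactly when the state's representation coincides with the goal's, which is what lets the representation space act as the distance metric described above.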
Nowadays, digitalization has an immense impact on the landscape of jobs. This technological revolution creates new industries and professions, promises greater efficiency and improves the quality of working life. However, emerging technologies such as robotics and artificial intelligence (AI) are reducing human intervention, thus advancing automation and eliminating thousands of jobs and entire occupational profiles. To prepare employees for the changing demands of work, adequate and timely training of the workforce and real-time support of workers in new positions is necessary. Therefore, it is investigated whether user-oriented technologies, such as augmented reality (AR) and virtual reality (VR), can be applied "on-the-job" for such training and support—also known as intelligence augmentation (IA). To address this problem, this work synthesizes results of a systematic literature review as well as a practically oriented search on augmented reality and virtual reality use cases within the IA context. A total of 150 papers and use cases are analyzed to identify suitable areas of application in which it is possible to enhance employees' capabilities. The results of both the theoretical and practical work show that VR is primarily used to train employees without prior knowledge, whereas AR is used to expand the scope of competence of individuals in their field of expertise while on the job. Based on these results, a framework is derived which provides practitioners with guidelines as to how AR or VR can support workers at their job so that they can keep up with anticipated skill demands. Furthermore, it shows for which application areas AR or VR can provide workers with sufficient training to learn new job tasks. In this way, this research provides practical recommendations to accompany the imminent disruptions caused by AI and similar technologies and to alleviate the associated negative effects on the German labor market.
We present a symmetry result for solutions of equations involving the fractional Laplacian in a domain with at least two perpendicular symmetries. We show that if the solution is continuous, bounded, and odd in one direction such that it has a fixed sign on one side, then it will be symmetric in the perpendicular direction. Moreover, the solution will be monotonic in the part where it is of fixed sign. In addition, we also present a class of examples to which our result can be applied.