004 Data processing; Computer science
Background: The technical development of imaging techniques in life sciences has enabled the three-dimensional recording of living samples at increasing temporal resolutions. Dynamic 3D data sets of developing organisms allow for time-resolved quantitative analyses of morphogenetic changes in three dimensions, but require efficient and automatable analysis pipelines to tackle the resulting terabytes of image data. Particle image velocimetry (PIV) is a robust and segmentation-free technique that is suitable for quantifying collective cellular migration on data sets with different labeling schemes. This paper presents the implementation of an efficient 3D PIV package using the Julia programming language: quickPIV. Our software is focused on optimizing CPU performance and ensuring the robustness of the PIV analyses on biological data.
Results: QuickPIV is three times faster than the Python implementation hosted in openPIV, both in 2D and 3D. Our software is also faster than the fastest 2D PIV package in openPIV, written in C++. The accuracy evaluation of our software on synthetic data agrees with the expected accuracies described in the literature. Additionally, by applying quickPIV to three data sets of the embryogenesis of Tribolium castaneum, we obtained vector fields that recapitulate the migration movements of gastrulation, both in nuclear and actin-labeled embryos. We show normalized squared error cross-correlation to be especially accurate in detecting translations in non-segmentable biological image data.
Conclusions: The presented software addresses the need for a fast and open-source 3D PIV package in biological research. Currently, quickPIV offers efficient 2D and 3D PIV analyses featuring zero-normalized and normalized squared error cross-correlations, sub-pixel/voxel approximation, and multi-pass. Post-processing options include filtering and averaging of the resulting vector fields, extraction of velocity, divergence and collectiveness maps, simulation of pseudo-trajectories, and unit conversion. In addition, our software includes functions to visualize the 3D vector fields in Paraview.
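The core step that any PIV implementation performs, locating the displacement that maximizes the cross-correlation between corresponding interrogation windows of two frames, can be sketched in a few lines. This is a generic zero-mean 2D illustration in Python for clarity, not quickPIV's Julia API; the window size and the synthetic shift are arbitrary.

```python
import numpy as np
from scipy.signal import fftconvolve

def piv_displacement(win1, win2):
    """Integer (dy, dx) displacement between two equally sized
    interrogation windows, found via the zero-mean cross-correlation peak."""
    a = win1 - win1.mean()
    b = win2 - win2.mean()
    # Cross-correlation of b with a, computed as convolution with a reversed.
    corr = fftconvolve(b, a[::-1, ::-1], mode="full")
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # The center of the full correlation map corresponds to zero shift.
    return peak[0] - (a.shape[0] - 1), peak[1] - (a.shape[1] - 1)

# Synthetic check: frame 2 is frame 1 translated by (3, -2) pixels.
rng = np.random.default_rng(1)
frame1 = rng.random((64, 64))
frame2 = np.roll(frame1, shift=(3, -2), axis=(0, 1))
win1 = frame1[16:48, 16:48]          # same window location in both frames
win2 = frame2[16:48, 16:48]
print(piv_displacement(win1, win2))  # (3, -2)
```

Sub-pixel approximation and multi-pass refinement, as offered by quickPIV, build on exactly this correlation peak, e.g. by fitting a Gaussian around it.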
AttendAffectNet-emotion prediction of movie viewers using multimodal fusion with self-attention
(2021)
In this paper, we tackle the problem of predicting the affective responses of movie viewers based on the content of the movies. Current studies on this topic focus on video representation learning and fusion techniques to combine the extracted features for predicting affect. Yet, they typically ignore both the correlation between multiple modality inputs and the correlation between temporal inputs (i.e., sequential features). To explore these correlations, we propose a neural network architecture, AttendAffectNet (AAN), that uses the self-attention mechanism to predict the emotions of movie viewers from different input modalities. In particular, visual, audio, and text features are considered for predicting emotions, expressed in terms of valence and arousal. We analyze three variants of our proposed AAN: Feature AAN, Temporal AAN, and Mixed AAN. The Feature AAN applies the self-attention mechanism to the features extracted from the different modalities (including video, audio, and movie subtitles) of a whole movie, thereby capturing the relationships between them. The Temporal AAN takes the time domain of the movies and the sequential dependency of affective responses into account. In the Temporal AAN, self-attention is applied to the concatenated (multimodal) feature vectors representing different subsequent movie segments. In the Mixed AAN, we combine the strong points of the Feature AAN and the Temporal AAN by applying self-attention first to the vectors of features obtained from different modalities in each movie segment and then to the feature representations of all subsequent (temporal) movie segments. We extensively trained and validated our proposed AAN on both the MediaEval 2016 dataset for the Emotional Impact of Movies Task and the extended COGNIMUSE dataset.
Our experiments demonstrate that audio features play a more influential role than those extracted from video and movie subtitles when predicting the emotions of movie viewers on these datasets. The models that use all visual, audio, and text features simultaneously as their inputs performed better than those using features extracted from each modality separately. In addition, the Feature AAN outperformed other AAN variants on the above-mentioned datasets, highlighting the importance of taking different features as context to one another when fusing them. The Feature AAN also performed better than the baseline models when predicting the valence dimension.
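The self-attention operation the AAN variants apply to per-modality feature vectors is standard scaled dot-product attention. A single-head numpy sketch with one row per modality (the dimensions and the single-head form are illustrative, not the paper's exact architecture):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over the rows of X.
    Here each row of X is one modality's feature vector, so each
    modality attends to all others (the Feature AAN idea)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    return softmax(scores) @ V

rng = np.random.default_rng(0)
d = 16                                   # illustrative feature size
X = rng.standard_normal((3, d))          # 3 modalities: video, audio, text
Wq, Wk, Wv = (rng.standard_normal((d, d)) * d**-0.5 for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (3, 16)
```

The Temporal AAN applies the same operation, but with one row per movie segment (concatenated multimodal features) instead of one row per modality.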
Background: Clinical trial registries increase transparency in medical research by making information and results of planned, ongoing, and completed studies publicly available. However, the registration of clinical trials remains a time-consuming manual task complicated by the fact that the same studies often need to be registered in different registries with different data entry requirements and interfaces.
Objective: This study investigates how Health Level 7 (HL7) Fast Healthcare Interoperability Resources (FHIR) may be used as a standardized format for exchanging and storing clinical trial records.
Methods: We designed and prototypically implemented an open-source central trial registry containing records from university hospitals, which are automatically exported and updated by local study management systems.
Results: We provided an architecture and implementation of a multisite clinical trials registry based on HL7 FHIR as a data storage and exchange format.
Conclusions: The results show that FHIR resources establish a harmonized view of study information from heterogeneous sources by enabling automated data exchange between trial centers and central study registries.
A deep convolutional neural network (CNN) is developed to study symmetry energy (Esym(ρ)) effects by learning the mapping between the symmetry energy and the two-dimensional (transverse momentum and rapidity) distributions of protons and neutrons in heavy-ion collisions. Supervised training is performed with labeled data sets from ultrarelativistic quantum molecular dynamics (UrQMD) model simulations. It is found that, by using proton spectra on an event-by-event basis as input, the accuracy for classifying the soft and stiff Esym(ρ) is about 60% due to large event-by-event fluctuations, while by using event-summed proton spectra as input, the classification accuracy increases to 98%. The accuracies for the 5-label (5 different Esym(ρ)) classification task are about 58% and 72% when using proton and neutron spectra, respectively. For the regression task, the mean absolute errors (MAE), which measure the average magnitude of the absolute differences between the predicted and actual L (the slope parameter of Esym(ρ)), are about 20.4 and 14.8 MeV when using proton and neutron spectra, respectively. Fingerprints of the density-dependent nuclear symmetry energy on the transverse momentum and rapidity distributions of protons and neutrons can thus be identified by the convolutional neural network.
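The network input described above, a two-dimensional (transverse momentum, rapidity) spectrum, amounts to a 2D histogram over simulated particle lists. A minimal sketch in Python; the particle generator, binning, and axis ranges are illustrative assumptions, not the paper's actual UrQMD configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for UrQMD output: (rapidity, p_T) pairs for the
# protons of many simulated events.
n_events, n_protons = 200, 30
y  = rng.normal(0.0, 0.8, size=(n_events, n_protons))   # rapidity
pt = rng.rayleigh(0.4, size=(n_events, n_protons))      # p_T in GeV/c

def proton_spectrum(y_ev, pt_ev, bins=(24, 24)):
    """2D (y, p_T) histogram for one event: one CNN input channel."""
    h, _, _ = np.histogram2d(y_ev, pt_ev, bins=bins,
                             range=[[-2.5, 2.5], [0.0, 2.0]])
    return h

# Event-by-event inputs vs. the event-summed spectrum, which the paper
# found far easier to classify (98% vs. about 60% accuracy).
per_event = np.stack([proton_spectrum(y[i], pt[i]) for i in range(n_events)])
summed = per_event.sum(axis=0)
print(per_event.shape, summed.shape)  # (200, 24, 24) (24, 24)
```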
Because diabetes is associated with central nervous system changes, and olfactory dysfunction has been reported with increased prevalence among persons with diabetes, this study addressed the question of whether the risk of developing diabetes in the next 10 years is reflected in olfactory symptoms. In a cross-sectional study of 164 individuals seeking medical consulting for possible diabetes, olfactory function was evaluated using a standardized clinical test assessing olfactory threshold, odor discrimination, and odor identification. Metabolomic parameters were assessed from blood concentrations. The individual diabetes risk was quantified according to the validated German version of the "FINDRISK" diabetes risk score. Machine learning algorithms trained with metabolomics patterns predicted low or high diabetes risk with a balanced accuracy of 63–75%. Similarly, olfactory subtest results predicted the olfactory dysfunction category with a balanced accuracy of 85–94%, occasionally reaching 100%. However, olfactory subtest results failed to improve the prediction of diabetes risk based on metabolomics data, and metabolomics data did not improve the prediction of the olfactory dysfunction category based on olfactory subtest results. The results of the present study suggest that olfactory function is not a useful predictor of diabetes.
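The balanced accuracy reported above is the mean of the per-class recalls, which prevents a dominant class from inflating the score. A minimal reimplementation with an imbalanced toy example (the labels are invented for illustration):

```python
import numpy as np

def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recalls (macro-averaged recall)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    recalls = [np.mean(y_pred[y_true == c] == c) for c in np.unique(y_true)]
    return float(np.mean(recalls))

# Imbalanced toy labels: plain accuracy looks good, balanced accuracy does not.
y_true = [0] * 90 + [1] * 10
y_pred = [0] * 90 + [1] * 2 + [0] * 8   # misses most of the rare class
print(np.mean(np.array(y_true) == np.array(y_pred)))  # 0.92
print(balanced_accuracy(y_true, y_pred))              # (1.0 + 0.2) / 2 = 0.6
```

This is equivalent to scikit-learn's `balanced_accuracy_score` for the unadjusted case.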
The use of artificial intelligence (AI) systems in biomedical and clinical settings can disrupt the traditional doctor–patient relationship, which is based on trust and transparency in medical advice and therapeutic decisions. When the diagnosis or selection of a therapy is no longer made solely by the physician, but to a significant extent by a machine using algorithms, decisions become nontransparent. Skill learning is the most common application of machine learning algorithms in clinical decision making. These are a class of very general algorithms (artificial neural networks, classifiers, etc.) that are tuned based on examples to optimize the classification of new, unseen cases. For such algorithms, it is pointless to ask for an explanation of an individual decision. A detailed understanding of the mathematical details of an AI algorithm may be possible for experts in statistics or computer science. However, when it comes to the fate of human beings, this "developer's explanation" is not sufficient. The concept of explainable AI (XAI) as a solution to this problem is attracting increasing scientific and regulatory interest. This review focuses on the requirement that XAIs must be able to explain in detail the decisions made by the AI to the experts in the field.
DNA points accumulation for imaging in nanoscale topography (DNA-PAINT) is a super-resolution technique with relatively easy-to-implement multi-target imaging. However, image acquisition is slow, as sufficient statistical data have to be generated from spatio-temporally isolated single emitters. Here, we trained the neural network (NN) DeepSTORM to predict fluorophore positions from high-emitter-density DNA-PAINT data, achieving image acquisition in one minute. We demonstrate multi-color super-resolution imaging of structure-conserved semi-thin neuronal tissue and imaging of large samples. This improvement can be integrated into any single-molecule microscope and enables fast single-molecule super-resolution microscopy.
Electrocardiograms (ECGs) record heart activity and are the most common and reliable method to detect cardiac arrhythmias such as atrial fibrillation (AFib). Lately, many commercially available devices, such as smartwatches, offer ECG monitoring. Therefore, there is increasing demand for designing deep learning models that can be physically implemented on these small portable devices with limited energy supply. In this paper, a workflow for the design of a small, energy-efficient recurrent convolutional neural network (RCNN) architecture for AFib detection is proposed. The approach, however, generalizes well to any type of long time series. In contrast to previous studies that demand thousands of additional network neurons and millions of extra model parameters, the logical steps for the generation of a CNN with only 114 trainable parameters are described. The model consists of a small segmented CNN in combination with an optimal energy classifier. The architectural decisions are made by using energy consumption as a metric of equal importance to accuracy. The optimisation steps focus on the software, which can afterwards be embedded on a physical chip. Finally, a comparison with previous relevant studies suggests that the huge CNNs widely used for similar tasks are mostly redundant and needlessly computationally expensive.
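A parameter budget like the one quoted above (114 trainable parameters) can be checked with the standard parameter-count formulas for 1D convolutional and dense layers. The layer shapes below are hypothetical and chosen only to show the arithmetic, not the architecture from the paper:

```python
def conv1d_params(in_ch, out_ch, kernel, bias=True):
    """Trainable parameters of a 1D convolution: out_ch * (in_ch * k + bias)."""
    return out_ch * (in_ch * kernel + (1 if bias else 0))

def dense_params(n_in, n_out, bias=True):
    """Trainable parameters of a fully connected layer."""
    return n_out * (n_in + (1 if bias else 0))

# Hypothetical tiny AFib detector: two small conv layers, then a
# binary output after global pooling.
layers = [
    conv1d_params(1, 4, 5),   # 4 * (1*5 + 1) = 24
    conv1d_params(4, 4, 3),   # 4 * (4*3 + 1) = 52
    dense_params(4, 2),       # 2 * (4 + 1)   = 10
]
print(sum(layers))  # 86 trainable parameters
```

Counts like these make the contrast with million-parameter CNNs concrete: channel widths and kernel sizes dominate the budget.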
A key competence for open-ended learning is the formation of increasingly abstract representations useful for driving complex behavior. Abstract representations ignore specific details and facilitate generalization. Here we consider the learning of abstract representations in a multi-modal setting with two or more input modalities. We treat the problem as a lossy compression problem and show that generic lossy compression of multimodal sensory input naturally extracts abstract representations that tend to strip away modality-specific details and preferentially retain information that is shared across the different modalities. Furthermore, we propose an architecture to learn abstract representations by identifying and retaining only the information that is shared across multiple modalities while discarding any modality-specific information.
Mathematical modeling of the molecular switch of TNFR1-mediated signaling pathways using Petri nets
(2021)
The paper describes a mathematical model of the molecular switch between cell survival, apoptosis, and necroptosis in cellular signaling pathways initiated by tumor necrosis factor receptor 1 (TNFR1). Based on experimental findings in the current literature, we constructed a Petri net model in terms of detailed molecular reactions for the molecular players, protein complexes, post-translational modifications, and cross talk. The model comprises 118 biochemical entities, 130 reactions, and 299 connecting edges. Applying Petri net analysis techniques, we found 279 pathways describing complete signal flows from receptor activation to cellular response, representing the combinatorial diversity of functional pathways. Of these, 120 pathways steered the cell to survival, whereas 58 and 35 pathways led to apoptosis and necroptosis, respectively. For 65 pathways, the triggered response was not deterministic, leading to multiple possible outcomes. Based on the Petri net, we investigated the detailed in silico knockout behavior and identified important checkpoints of the TNFR1 signaling pathway in terms of ubiquitination within complex I and the NF-κB-dependent gene expression, which controls the caspase activity in complex II and apoptosis induction.
Consciousness transiently fades away during deep sleep, more stably under anesthesia, and sometimes permanently due to brain injury. The development of an index to quantify the level of consciousness across these different states is regarded as a key problem both in basic and clinical neuroscience. We argue that this problem is ill-defined since such an index would not exhaust all the relevant information about a given state of consciousness. While the level of consciousness can be taken to describe the actual brain state, a complete characterization should also include its potential behavior against external perturbations. We developed and analyzed whole-brain computational models to show that the stability of conscious states provides information complementary to their similarity to conscious wakefulness. Our work leads to a novel methodological framework to sort out different brain states by their stability and reversibility, and illustrates its usefulness to dissociate between physiological (sleep), pathological (brain-injured patients), and pharmacologically-induced (anesthesia) loss of consciousness.
The human visual cortex enables visual perception through a cascade of hierarchical computations in cortical regions with distinct functionalities. Here, we introduce an AI-driven approach to discover the functional mapping of the visual cortex. We related human brain responses to scene images measured with functional MRI (fMRI) systematically to a diverse set of deep neural networks (DNNs) optimized to perform different scene perception tasks. We found a structured mapping between DNN tasks and brain regions along the ventral and dorsal visual streams. Low-level visual tasks mapped onto early brain regions, 3-dimensional scene perception tasks mapped onto the dorsal stream, and semantic tasks mapped onto the ventral stream. This mapping was of high fidelity, with more than 60% of the explainable variance in nine key regions being explained. Together, our results provide a novel functional mapping of the human visual cortex and demonstrate the power of the computational approach.
Nuclear pore complexes (NPCs) mediate nucleocytoplasmic transport. Their intricate 120 MDa architecture remains incompletely understood. Here, we report a near-complete structural model of the human NPC scaffold with explicit membrane and in multiple conformational states. We combined AI-based structure prediction with in situ and in cellulo cryo-electron tomography and integrative modeling. We show that linker Nups spatially organize the scaffold within and across subcomplexes to establish the higher-order structure. Microsecond-long molecular dynamics simulations suggest that the scaffold is not required to stabilize the inner and outer nuclear membrane fusion, but rather widens the central pore. Our work exemplifies how AI-based modeling can be integrated with in situ structural biology to understand subcellular architecture across spatial organization levels.
Artificial neural networks, taking inspiration from biological neurons, have become an invaluable tool for machine learning applications. Recent studies have developed techniques to effectively tune the connectivity of sparsely-connected artificial neural networks, which have the potential to be more computationally efficient than their fully-connected counterparts and more closely resemble the architectures of biological systems. We here present a normalisation, based on the biophysical behaviour of neuronal dendrites receiving distributed synaptic inputs, that divides the weight of an artificial neuron’s afferent contacts by their number. We apply this dendritic normalisation to various sparsely-connected feedforward network architectures, as well as simple recurrent and self-organised networks with spatially extended units. The learning performance is significantly increased, providing an improvement over other widely-used normalisations in sparse networks. The results are two-fold, being both a practical advance in machine learning and an insight into how the structure of neuronal dendritic arbours may contribute to computation.
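The dendritic normalisation described above, dividing each unit's afferent weights by the number of its afferent contacts, is compact to state in code. A minimal numpy sketch on a sparse weight matrix (the shapes and sparsity level are illustrative, not the authors' implementation):

```python
import numpy as np

def dendritic_normalisation(W):
    """Divide each neuron's incoming weights by its number of afferent
    (nonzero) contacts. Rows index postsynaptic units, columns presynaptic."""
    n_afferent = np.count_nonzero(W, axis=1, keepdims=True)
    return W / np.maximum(n_afferent, 1)   # guard units with no contacts

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8))
W[rng.random(W.shape) < 0.6] = 0.0         # sparse connectivity
Wn = dendritic_normalisation(W)
print(np.count_nonzero(W, axis=1))         # contacts per postsynaptic unit
```

The effect is that densely and sparsely innervated units receive comparable total drive, mirroring how distributed synaptic inputs are attenuated on biological dendrites.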
The measurement of protein dynamics by proteomics to study cell remodeling has seen increased attention over the last years. This development is largely driven by a number of technological advances in proteomics methods. Pulsed stable isotope labeling in cell culture (SILAC) combined with tandem mass tag (TMT) labeling has evolved as a gold standard for profiling protein synthesis and degradation. While the experimental setup is similar to typical proteomics experiments, the data analysis proves more difficult: After peptide identification through search engines, data extraction requires either custom scripted pipelines or tedious manual table manipulations to extract the TMT-labeled heavy and light peaks of interest. To overcome this limitation, which deters researchers from using protein dynamic proteomics, we developed a user-friendly, browser-based application that allows easy and reproducible data analysis without the need for scripting experience. In addition, we provide a python package that can be implemented in established data analysis pipelines. We anticipate that this tool will ease data analysis and spark further research aimed at monitoring protein translation and degradation by proteomics.
Diminished sense of smell impairs the quality of life but olfactorily disabled people are hardly considered in measures of disability inclusion. We aimed to stratify perceptual characteristics and odors according to the extent to which they are perceived differently with reduced sense of smell, as a possible basis for creating olfactory experiences that are enjoyed in a similar way by subjects with normal or impaired olfactory function. In 146 subjects with normal or reduced olfactory function, perceptual characteristics (edibility, intensity, irritation, temperature, familiarity, hedonics, painfulness) were tested for four sets of 10 different odors each. Data were analyzed with (i) a projection based on principal component analysis and (ii) the training of a machine-learning algorithm in a 1000-fold cross-validated setting to distinguish between olfactory diagnosis based on odor property ratings. Both analytical approaches identified perceived intensity and familiarity with the odor as discriminating characteristics between olfactory diagnoses, while evoked pain sensation and perceived temperature were not discriminating, followed by edibility. Two disjoint sets of odors were identified, i.e., d = 4 “discriminating odors” with respect to olfactory diagnosis, including cis-3-hexenol, methyl salicylate, 1-butanol and cineole, and d = 7 “non-discriminating odors”, including benzyl acetate, heptanal, 4-ethyl-octanoic acid, methional, isobutyric acid, 4-decanolide and p-cresol. Different weightings of the perceptual properties of odors with normal or reduced sense of smell indicate possibilities to create sensory experiences such as food, meals or scents that by emphasizing trigeminal perceptions can be enjoyed by both normosmic and hyposmic individuals.
Treatments for amblyopia focus on vision therapy and patching of one eye. Predicting the success of these methods remains difficult, however. Recent research has used binocular rivalry to monitor visual cortical plasticity during occlusion therapy, leading to a successful prediction of the recovery rate of the amblyopic eye. The underlying mechanisms and their relation to neural homeostatic plasticity are not known. Here we propose a spiking neural network to explain the effect of short-term monocular deprivation on binocular rivalry. The model reproduces perceptual switches as observed experimentally. When one eye is occluded, inhibitory plasticity changes the balance between the eyes and leads to longer dominance periods for the eye that has been deprived. The model suggests that homeostatic inhibitory plasticity is a critical component of the observed effects and might play an important role in the recovery from amblyopia.
Sample-based longitudinal discrete choice experiments: preferences for electric vehicles over time
(2021)
Discrete choice experiments have emerged as the state-of-the-art method for measuring preferences, but they are mostly used in cross-sectional studies. In seeking to make them applicable for longitudinal studies, our study addresses two common challenges: working with different respondents and handling altering attributes. We propose a sample-based longitudinal discrete choice experiment in combination with a covariate-extended hierarchical Bayes logit estimator that allows one to test the statistical significance of changes. We showcase this method’s use in studies about preferences for electric vehicles over six years and empirically observe that preferences develop in an unpredictable, non-monotonous way. We also find that inspecting only the absolute differences in preferences between samples may result in misleading inferences. Moreover, surveying a new sample produced similar results as asking the same sample of respondents over time. Finally, we experimentally test how adding or removing an attribute affects preferences for the other attributes.
Correction to: Computational Economics https://doi.org/10.1007/s10614-020-10061-x
The original publication has been updated. In the original publication of this article, under the Introduction heading section, the corrections to the second paragraph’s inline equation were not incorporated. The author’s additional corrections have also been incorporated. The publisher apologizes for the error made during production.
The anan project is a tool for debugging distributed high-performance computers. The novelty of this contribution is that well-known methods, already used successfully for debugging software and hardware, have been transferred to high-performance computing. As part of this work, a tool named anan was implemented that assists in debugging. It can also be used as a more dynamic form of monitoring. Both use cases have been tested.
The tool consists of two parts:
1. a part named anan, which is operated interactively by the user,
2. and a part named anand, which automatically collects the requested measurements and executes commands if necessary.
The anand part runs sensors (small pattern-driven algorithms) whose results are merged by anan. To a first approximation, anan can be described as a monitoring system that (1) can be reconfigured quickly and (2) can measure more complex values that go beyond correlations of simple time series.
Modeling long-term neuronal dynamics may require running long-lasting simulations. Such simulations are computationally expensive, and therefore it is advantageous to use simplified models that sufficiently reproduce the real neuronal properties. Reducing the complexity of the neuronal dendritic tree is one option. We have therefore developed a new reduced-morphology model of the rat CA1 pyramidal cell that retains the major dendritic branch classes. To validate our model against experimental data, we used HippoUnit, a recently established standardized test suite for CA1 pyramidal cell models. HippoUnit allowed us to systematically evaluate the somatic and dendritic properties of the model and compare them to models publicly available in the ModelDB database. Our model reproduced (1) somatic spiking properties, (2) somatic depolarization block, (3) EPSP attenuation, (4) action potential backpropagation, and (5) synaptic integration at oblique dendrites of CA1 neurons. The overall performance of the model in these tests achieved higher biological accuracy compared to the other tested models. We conclude that, due to its realistic biophysics and low morphological complexity, our model captures key physiological features of CA1 pyramidal neurons while shortening computational time. Thus, the validated reduced-morphology model can be used for computationally demanding simulations as a substitute for more complex models.
Correction to: Höllbacher, S., Wittum, G.: A sharp interface method using enriched finite elements for elliptic interface problems. Numer. Math. 147, 783 (2021). DOI: 10.1007/s00211-021-01180-0.
We present an immersed boundary method for the solution of elliptic interface problems with discontinuous coefficients which provides a second-order approximation of the solution. The proposed method can be categorised as an extended or enriched finite element method. In contrast to other extended FEM approaches, the new shape functions are projected in order to satisfy the Kronecker-delta property with respect to the interface. The resulting combination of projection and restriction was already derived in Höllbacher and Wittum (TBA, 2019a) for application to particulate flows. The crucial benefits are the preservation of the symmetry and positive definiteness of the continuous bilinear operator. Besides, no additional stabilisation terms are necessary. Furthermore, since our enrichment can be interpreted as adaptive mesh refinement, the standard integration schemes can be applied on the cut elements. Finally, small cut elements do not impair the condition of the scheme, and we propose a simple procedure to ensure good conditioning independent of the location of the interface. The stability and convergence of the solution are proven, and numerical tests demonstrate optimal order of convergence.
This thesis presents the Certainty-Tool, an extension for the Unity-based part of the Stolperwege project. It continues the idea behind the VAnnotatoR and allows the visualization of informational uncertainty for the buildings digitally reconstructed in the Stolperwege practical course. The tool incorporates the concept behind BIM (Building Information Modelling), a novel planning method in the AEC industry that attaches self-describing information to the parts of a building. In the Certainty-Tool, levels of informational uncertainty are defined and assigned to parts of a building. The tool is demonstrated on a digital reconstruction of the destroyed Rothschild-Palais. Furthermore, an evaluation based on the Usability Metric for User Experience was carried out, and further developments and improvements of the tool are discussed.
Reactive oxygen species are a class of naturally occurring, highly reactive molecules that change the structure and function of macromolecules. This can often lead to irreversible intracellular damage. Conversely, they can also cause reversible changes through post-translational modification of proteins which are utilized in the cell for signaling. Most of these modifications occur on specific cysteines. Which structural and physicochemical features contribute to the sensitivity of cysteines to redox modification is currently unclear. Here, I investigated the influence of protein structural and sequence features on the modifiability of proteins and specific cysteines therein using statistical and machine learning methods. I found several strong structural predictors for redox modification, such as a higher accessibility to the cytosol and a high number of positively charged amino acids in the close vicinity. I detected a high frequency of other post-translational modifications, such as phosphorylation and ubiquitination, near modified cysteines. Distribution of secondary structure elements appears to play a major role in the modifiability of proteins. Utilizing these features, I created models to predict the presence of redox modifiable cysteines in proteins, including human mitochondrial complex I, NKG2E natural killer cell receptors and proximal tubule cell proteins, and compared some of these predictions to earlier experimental results.
This thesis concerns three specific constraint satisfaction problems: the k-SAT problem, random linear equations, and the Potts model. We investigate a phenomenon called replica symmetry, its consequences, and its limitations. For the k-SAT problem, we show that replica symmetry holds up to a threshold d*. However, beyond another critical threshold d**, replica symmetry can no longer hold, which enables us to establish the existence of a replica symmetry breaking region. For the random linear problem, a peculiar phenomenon occurs. We observe that a more robust version of replica symmetry (strong replica symmetry) holds up to a threshold d = e and ceases to hold thereafter. This phenomenon is linked to the fact that below the threshold d = e, the fraction of frozen variables, i.e. variables forced to take the same value in all solutions, is concentrated around a deterministic value, whereas it vacillates between two values with equal probability for d > e. Lastly, for the Potts model, we show that a phenomenon called metastability occurs. The latter can be understood as a consequence of a trivial replica symmetry breaking scheme. This metastability further produces slow-mixing results for two famous Markov chains, the Glauber and the Swendsen-Wang dynamics.
When performing transfer learning in Computer Vision, normally a pretrained model (source model) trained on a specific task and a large dataset like ImageNet is used. The learned representation of that source model is then used to perform a transfer to a target task. Performing transfer learning in this way has had a great impact on Computer Vision because it works seamlessly, especially between tasks that are related to each other. Recent research has investigated the relationship between different tasks and its impact on transfer learning by developing similarity methods. What these similarity methods have in common is that they avoid actually performing transfer learning; instead, they predict transfer learning rankings so that the best possible source model can be selected from a range of different source models. However, these methods have focused only on single-source transfers and have not paid attention to multi-source transfers. Multi-source transfers promise even better results than single-source transfers, as they combine information from multiple source tasks, all of which are useful to the target task. We fill this gap and propose a many-to-one task similarity method called MOTS that predicts both single-source transfers and multi-source transfers to a specific target task. We do so by using linear regression and the source representations of the source models to predict the target representation. When focusing only on single-source transfers, we achieve results at least on par with related state-of-the-art methods on the Pascal VOC and Taskonomy benchmarks. When using single-source and multi-source transfers together, we even outperform all of them (0.9 vs. 0.8) on the Taskonomy benchmark. We additionally investigate the performance of MOTS in conjunction with a multi-task learning architecture.
The task-decoder heads of a multi-task learning architecture are used in different variations to perform multi-source transfers, since this promises greater efficiency than multiple single-task architectures and incurs less computational cost. Results show that our proposed method accurately predicts transfer learning rankings on the NYUD dataset, where the best transfer learning results are always achieved when using more than one source task. Additionally, we observe that even using just one task-decoder head from the multi-task learning architecture promises better transfer learning results than using a single-task architecture for the same task, which is due to the information shared between different tasks in the earlier layers of the multi-task learning architecture. Since the MOTS rankings for selecting the MTI-Net task-decoder head with the highest transfer learning performance were very accurate for the NYUD dataset but not satisfying for the Pascal VOC dataset, further experiments are needed to verify the generalizability of MOTS rankings for selecting the optimal task-decoder head from a multi-task architecture.
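The core idea of MOTS, using linear regression on frozen source representations to predict the target representation and ranking transfers by the quality of the fit, can be sketched in a few lines. This is an illustrative toy under invented assumptions: the task names, feature matrices, and the R²-style score below are stand-ins, not the actual method's details.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for frozen features: rows are the same probe images
# passed through three hypothetical source models and one target model.
n_images, dim = 200, 16
sources = {name: rng.normal(size=(n_images, dim)) for name in ["depth", "edges", "normals"]}
target = 0.8 * sources["depth"] + 0.1 * rng.normal(size=(n_images, dim))

def transferability(source_feats, target_feats):
    """Fit a linear map from source to target features and score it
    by the fraction of target variance explained (higher = better)."""
    W, *_ = np.linalg.lstsq(source_feats, target_feats, rcond=None)
    residual = target_feats - source_feats @ W
    return 1.0 - residual.var() / target_feats.var()

# Single-source ranking: score each source model on its own.
single = {name: transferability(f, target) for name, f in sources.items()}

# Multi-source ranking: concatenate the features of a source subset.
multi = transferability(np.hstack([sources["depth"], sources["edges"]]), target)

best = max(single, key=single.get)
```

Because the concatenated multi-source design contains every single-source design as a subset, its in-sample fit can only improve, which mirrors the intuition that multi-source transfers combine complementary information.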
Solving High-Dimensional Dynamic Portfolio Choice Models with Hierarchical B-Splines on Sparse Grids
(2021)
Discrete time dynamic programming to solve dynamic portfolio choice models has three inherent issues: firstly, the curse of dimensionality prohibits more than a handful of continuous states. Secondly, in higher dimensions, even regular sparse grid discretizations need too many grid points for sufficiently accurate approximations of the value function. Thirdly, the models usually require continuous control variables, and hence gradient-based optimization with smooth approximations of the value function is necessary to obtain accurate solutions to the optimization problem. For the first time, we enable accurate and fast numerical solutions with gradient-based optimization while still allowing for spatial adaptivity using hierarchical B-splines on sparse grids. Compared to the standard linear bases on sparse grids or finite difference approximations of the gradient, our approach saves an order of magnitude in total computational complexity for a representative dynamic portfolio choice model with varying state space dimensionality, stochastic sample space, and choice variables.
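The spline ingredient can be illustrated in isolation. The sketch below evaluates B-spline basis functions with the textbook Cox-de Boor recursion on uniform knots; it is a generic construction, not the paper's sparse-grid code. Cubic B-splines are twice continuously differentiable, which is what makes smooth, gradient-based optimization over an interpolated value function possible.

```python
import numpy as np

def bspline_basis(i, p, knots, x):
    """Cox-de Boor recursion: value of the i-th B-spline of degree p at x."""
    if p == 0:
        return 1.0 if knots[i] <= x < knots[i + 1] else 0.0
    left = 0.0
    if knots[i + p] > knots[i]:
        left = (x - knots[i]) / (knots[i + p] - knots[i]) * bspline_basis(i, p - 1, knots, x)
    right = 0.0
    if knots[i + p + 1] > knots[i + 1]:
        right = (knots[i + p + 1] - x) / (knots[i + p + 1] - knots[i + 1]) \
                * bspline_basis(i + 1, p - 1, knots, x)
    return left + right

# Uniform knots, cubic degree: 12 knots give 12 - p - 1 = 8 basis functions.
p = 3
knots = np.arange(12.0)
x = 5.3  # inside [knots[p], knots[-p-1]) = [3, 8), where partition of unity holds
total = sum(bspline_basis(i, p, knots, x) for i in range(len(knots) - p - 1))
```

On the interior of the knot range the basis functions sum to one (partition of unity), and each has local support, which is what keeps hierarchical, spatially adaptive refinement cheap.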
Computers are already quite good at recognizing objects and faces, and also at detecting that something is moving and in which direction. What still causes difficulties for artificial intelligence, however, is grasping what kind of movement it is dealing with. Computers are now learning this in the lab of Prof. Hilde Kühne at Goethe University.
Present research in high energy physics as well as in nuclear physics requires the use of ever more powerful and complex particle accelerators to provide high-luminosity, high-intensity, and high-brightness beams to experiments. With the increased technological complexity of accelerators, meeting the demands of experimenters necessitates a blend of accelerator physics and technology. The problem becomes severe when the beam quality has to be optimized in accelerator systems with thousands of free parameters, including the strengths of quadrupoles and sextupoles, RF voltages, etc. Machine learning methods and concepts of artificial intelligence are being adopted in various industrial and scientific branches, and recently these methods have been used in high energy physics, mainly for the analysis of experimental data.
In accelerator physics, the machine learning approach has not yet found wide application, and in general these methods are used without a deep understanding of their effectiveness relative to more traditional schemes or other alternative approaches. The purpose of this PhD research is to investigate machine learning methods applied to accelerator optimization and accelerator control, in particular to optics measurements and corrections. Optics correction, maximization of acceptance, and simultaneous control of various accelerator components such as focusing magnets constitute a typical accelerator scenario. The core of the study is the effectiveness of machine learning methods in a complex system such as the Large Hadron Collider, whose beam dynamics exhibit a nonlinear response to machine settings. This work presents the successful application of several machine learning techniques, such as clustering, decision trees, linear multivariate models, and neural networks, to beam optics measurements and corrections at the LHC, providing guidelines for incorporating machine learning techniques into accelerator operation and discussing future opportunities and potential work in this field.
The automatic generation of 3D scenes from descriptions in natural language is a current research topic. So-called Text2Scene applications are expected to identify objects and spatial relations in a text input and to construct a visual representation of the description from 3D models. Previous approaches combine a keyword-based recognition of explicitly stated information with previously learned common-sense knowledge about the sensible arrangement of objects. However, these applications lack a deeper understanding of spatial language.
With the ISOSpace annotation scheme, texts can be enriched with detailed spatial information and thus made more comprehensible for NLP applications. In earlier work, the SemAF annotator for creating ISOSpace annotations was developed as a module for the TextAnnotator. In this thesis, the SemAF annotator was additionally extended with a scene-creation feature: in the web interface of the TextAnnotator, users can assign objects from the ShapeNet dataset to individual words and arrange them spatially in a two-dimensional representation of a scene. Despite some limitations due to the missing third dimension, good results can be achieved in many cases. The scenes created in this way are later to be used in combination with the ISOSpace annotations to develop Text2Scene applications with a more comprehensive spatial understanding.
Smaller side tasks of this thesis were the extension of the SemAF annotator with additional annotation types as well as various improvements to the already existing ISOSpace annotation functionality.
Biodiversity information is contained in countless digitized and unprocessed scholarly texts. Although automated extraction of these data has been gaining momentum for years, there are still innumerable text sources that are poorly accessible and require a more advanced range of methods to extract relevant information. To improve the access to semantic biodiversity information, we have launched the BIOfid project (www.biofid.de) and have developed a portal to access the semantics of German language biodiversity texts, mainly from the 19th and 20th century. However, to make such a portal work, a couple of methods had to be developed or adapted first. In particular, text-technological information extraction methods were needed, which extract the required information from the texts. Such methods draw on machine learning techniques, which in turn are trained on training data. To this end, we gathered, among other resources, the BIOfid text corpus, a cooperatively built resource developed by biologists, text technologists, and linguists. A special feature of BIOfid is its multiple annotation approach, which takes into account both general and biology-specific classifications, and by this means goes beyond previous, typically taxon- or ontology-driven proper name detection. We describe the design decisions and the genuine Annotation Hub Framework underlying the BIOfid annotations and present agreement results. The tools used to create the annotations are introduced, and the use of the data in the semantic portal is described. Finally, some general lessons, in particular regarding multiple annotation projects, are drawn.
We consider algorithms for strategic communication with commitment power between two rational, self-interested parties. If a party has commitment power, it commits to a strategy, publishes it, and can no longer deviate from it.
Both parties have prior information about the state of the world. The first party (S) can observe the state directly. The second party (R), however, makes a decision by choosing one of n actions whose types are unknown to R. The type determines the possibly different, non-negative utilities for S and R. By sending signals, S tries to influence R's choice. We consider two basic scenarios: Bayesian persuasion and delegated search.
In Bayesian persuasion, S has commitment power. Here, S commits to a signaling scheme φ and communicates it to R. The scheme describes which signal S sends in which situation. Only afterwards does S learn the true state of the world. After receiving the signals determined by φ, R chooses one of the actions. Knowing φ allows R to update its beliefs about the state of the world based on the received signals. S must take this into account when designing φ, since R will not follow recommendations that benefit S at R's expense. We consider the problem from S's perspective and describe signaling schemes that guarantee S the largest possible utility.
First, we consider the offline case. Here, S learns the complete state of the world and then sends a signal to R. We study a scenario with a limited number k ≤ n of signals. With only k signals, S can recommend at most k different actions. For several symmetric instances, we describe a polynomial-time algorithm that computes an optimal signaling scheme with k signals.
Furthermore, we consider a subset of instances in which the types are drawn from known, independent distributions. We describe polynomial-time algorithms that compute a signaling scheme with k signals guaranteeing a constant approximation factor relative to the optimal signaling scheme with k signals.
In the online case, the action types are revealed one by one in rounds. After observing the current action, S sends a signal, and R must react immediately by either choosing or rejecting the action. The process ends when an action is chosen. Otherwise, the next action type is revealed, and previously rejected actions can no longer be chosen. As a benchmark for our online signaling schemes, we use the best offline signaling scheme.
First, we consider a scenario with independent distributions. We show how an optimal signaling scheme can be computed in polynomial time. However, there are examples in which, unlike in the offline case, S cannot achieve any positive value online. We then consider a subset of instances for which a simple signaling scheme guarantees a constant approximation factor, and we show its optimality.
Additionally, we consider 16 different scenarios with different levels of information for S and R and different objective functions for S and R, under the assumption that the action types are a priori unknown but are revealed in uniformly random order. For 14 cases, we describe signaling schemes with a constant approximation factor. No such schemes exist for the remaining two cases. In addition, for most cases we show that the described approximation guarantees are optimal.
In the second part, we consider an online variant of delegated search. Here, R has commitment power. The action types are drawn from known, independent distributions. Before S observes the realized types, R commits to an acceptance scheme φ. For each type, φ specifies the probability with which R accepts it. Consequently, S tries to find an action with a type that is good for S itself and is accepted by R. Since the process runs online, S must decide for each action individually whether to propose or discard it. Only proposed actions can be chosen by R.
For the offline case, constant approximation factors compared to an action with optimal value for R are known for identically distributed action types. We show that in the online case, R can in general only achieve a Θ(1/n)-approximation. The benchmark is the expected value of a one-dimensional online search by R.
Since this lower bound requires an exponential discrepancy in the type values for S, we consider parameterized instances. The parameters bound the values for S and the ratio of the values for R and S, respectively. We show (nearly) optimal logarithmic approximation factors with respect to these parameters, guaranteed by efficiently computable schemes.
The mobile games business is an ever-increasing sub-sector of the entertainment industry. Due to its high profitability but also high risk and competitive atmosphere, game publishers need to develop strategies that allow them to release new products at a high rate, but without compromising the already short lifespan of the firms' existing games. Successful game publishers must enlarge their user base by continually releasing new and entertaining games, while simultaneously motivating the current user base of existing games to remain active for more extended periods. Since the core-component reuse strategy has proven successful in other software products, this study investigates the advantages and drawbacks of this strategy in mobile games. Drawing on the widely accepted Product Life Cycle concept, the study investigates whether the introduction of a new mobile game built with core-components of an existing mobile game curtails the incumbent's product life cycle. Based on real and granular data on the gaming activity of a popular mobile game, the authors find that by promoting multi-homing (i.e., by smartly interlinking the incumbent and new product with each other so that users start consuming both games in parallel), the core-component reuse strategy can prolong the lifespan of the incumbent game.
When we browse via WiFi on a laptop or mobile phone, we receive data over a noisy channel. The received message may differ from the one that was sent originally. Luckily, it is often possible to reconstruct the original message, but it may take a lot of time: decoding the received message is a complex problem, NP-hard to be exact. As we continue browsing, new information is sent to us at a high frequency. So if lags are to be avoided, and since memory is finite, there is not much time left for decoding. Coding theory tackles this problem by creating models of the channels we use to communicate and tailoring codes to the channel properties. A well-known family of codes are Low-Density Parity-Check (LDPC) codes; they are widely used in standards like WiFi and DVB-T2. In practical settings, the complexity of decoding a received message can be heavily reduced by using LDPC codes and approximate decoding algorithms. This thesis lays out the basic construction of LDPC codes and their decoding using the sum-product algorithm. On this basis, a neural network to improve decoding is introduced by transforming the sum-product algorithm into a neural network decoder. This approach was first presented by Nachmani et al. and treated in detail by Navneet Agrawal in 2017. To find out how machine learning can improve the codes, the bit error rates of the trained neural network decoder are compared with those of the classic sum-product algorithm. Experiments with static and dynamic training datasets of various sizes, various signal-to-noise ratios, and both a feed-forward and a recurrent architecture show how to tune the neural network decoder even further. The results of the experiments are used to verify statements made in Agrawal's work. In addition, corrections and improvements in the area of metrics are presented. An implementation of the neural network will be made publicly available to facilitate access for others.
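The sum-product algorithm that the thesis unrolls into a neural network can be sketched on a tiny code. The parity-check matrix below is the (7,4) Hamming code, chosen only for brevity (real LDPC matrices are much larger and sparser), and the decoder is a standard flooding implementation, not the thesis code.

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code: 3 checks over 7 bits.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def sum_product_decode(llr, H, iters=10):
    """Flooding sum-product decoding. `llr` holds channel
    log-likelihood ratios (positive = bit 0 more likely)."""
    msg_vc = np.where(H, llr, 0.0)  # variable->check messages, initialised with channel LLRs
    for _ in range(iters):
        # check->variable: tanh rule, product over the *other* neighbours
        tanh_half = np.where(H, np.tanh(msg_vc / 2.0), 1.0)
        prod = tanh_half.prod(axis=1, keepdims=True)
        extrinsic = np.clip(prod / tanh_half, -0.999999, 0.999999)
        msg_cv = np.where(H, 2.0 * np.arctanh(extrinsic), 0.0)
        # variable->check: channel LLR plus the other checks' messages
        total = llr + msg_cv.sum(axis=0)
        msg_vc = np.where(H, total - msg_cv, 0.0)
        hard = (total < 0).astype(int)
        if not (H @ hard % 2).any():  # all parity checks satisfied
            break
    return hard

# All-zero codeword over BPSK; bit 2 is received unreliably as a likely 1.
llr = np.full(7, 4.0)
llr[2] = -1.0
decoded = sum_product_decode(llr, H)
```

With bit 2 received unreliably, the two parity checks involving it push its posterior LLR back to positive, and the all-zero codeword is recovered. The neural decoder of Nachmani et al. attaches trainable weights to exactly these message-passing edges in the unrolled graph.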
Digital distractions can interfere with goal attainment and lead to undesirable habits that are hard to get rid of. Various digital self-control interventions promise support to alleviate the negative impact of digital distractions. These interventions use different approaches, such as the blocking of apps and websites, goal setting, or visualizations of device usage statistics. While many apps and browser extensions make use of these features, little is known about their effectiveness. This systematic review synthesizes the current research to provide insights into the effectiveness of the different kinds of interventions. From a search of the 'ACM', 'Springer Link', 'Web of Science', 'IEEE Xplore' and 'PubMed' databases, we identified 28 digital self-control interventions. We categorized these interventions according to their features and their outcomes. The interventions showed varying degrees of effectiveness; in particular, interventions that relied purely on increasing the participants' awareness were barely effective. For those interventions that sanctioned the use of distractions, the current literature indicates that the sanctions have to be sufficiently difficult to overcome, as they will otherwise be quickly dismissed. The overall confidence in the results is low, with small sample sizes, short study durations, and unclear study contexts. From these insights, we highlight research gaps and close with suggestions for future research.
In this thesis, 4.6 million English tweets containing the keyword "Bitcoin" are analyzed, and the relationship between the sentiment of the tweets and the returns of Bitcoin is examined. To determine the sentiment classes, text classifiers with different approaches, including models based on convolutional neural networks and transformers, are evaluated and optimized in this context. In addition, a meta-model is constructed that outperforms the other examined models on the problem of classifying tweet sentiment into the three classes {positive, negative, neutral} in the considered domain. Regarding the relationship, the influence of features of the tweets and their authors is also specifically examined by means of distance correlation.
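Distance correlation, the dependence measure mentioned above, can be computed directly from pairwise distance matrices. A numpy-only sketch on invented toy series (the variable names and data are illustrative, not the thesis data):

```python
import numpy as np

def distance_correlation(x, y):
    """Sample distance correlation (Szekely et al.): detects linear and
    non-linear dependence; zero in the population only under independence."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    a = np.abs(x[:, None] - x[None, :])  # pairwise distance matrices
    b = np.abs(y[:, None] - y[None, :])
    # double-centre each distance matrix
    A = a - a.mean(0) - a.mean(1)[:, None] + a.mean()
    B = b - b.mean(0) - b.mean(1)[:, None] + b.mean()
    dcov2 = (A * B).mean()
    return np.sqrt(dcov2 / np.sqrt((A * A).mean() * (B * B).mean()))

rng = np.random.default_rng(1)
sentiment = rng.normal(size=500)                               # toy sentiment scores
returns = 0.5 * sentiment + rng.normal(scale=0.5, size=500)    # dependent toy returns
noise = rng.normal(size=500)                                   # independent toy series
```

Unlike the Pearson correlation, the distance correlation also captures non-linear relationships, which is why it is a sensible choice for relating tweet and author features to returns.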
Consequences of minimal length discretization on line element, metric tensor, and geodesic equation
(2021)
When the minimal length uncertainty emerging from a generalized uncertainty principle (GUP) is thoughtfully implemented, it is of great interest to consider its impact on the gravitational Einstein field equations (gEFEs) and to try to assess the consequential modifications of the metric, which manifest properties of quantum geometry due to quantum gravity. The GUP takes into account the gravitational impacts on the noncommutation relations of length (distance) and momentum operators, or of time and energy operators, and so on. On the other hand, the gEFEs relate classical geometry, that is, general-relativistic gravity, to the energy-momentum tensors, thereby proposing quantum equations of state. Despite the technical difficulties, we intend to insert the GUP into the metric tensor so that the line element and the geodesic equation in flat and curved space are accordingly modified. The latter apparently encompasses the acceleration, jerk, and snap (jounce) of a particle in the quasi-quantized gravitational field. Finite higher orders of acceleration apparently manifest phenomena such as accelerating expansion, transitions between different radii of curvature, and so on.
Our purpose was to analyze the robustness and reproducibility of magnetic resonance imaging (MRI) radiomic features. We constructed a multi-object fruit phantom to perform MRI acquisition as a scan-rescan using a 3 Tesla MRI scanner. We applied T2-weighted (T2w) half-Fourier acquisition single-shot turbo spin-echo (HASTE), T2w turbo spin-echo (TSE), T2w fluid-attenuated inversion recovery (FLAIR), T2 map and T1-weighted (T1w) TSE. Images were resampled to isotropic voxels. Fruits were segmented. The workflow was repeated by a second reader, and by the first reader after a pause of one month. We applied PyRadiomics to extract 107 radiomic features per fruit and sequence from seven feature classes. We calculated concordance correlation coefficients (CCC) and dynamic range (DR) to obtain measurements of feature robustness. The intraclass correlation coefficient (ICC) was calculated to assess intra- and inter-observer reproducibility. We calculated Gini scores to test the pairwise discriminative power specific to the features and MRI sequences. We depict Bland–Altman plots of features with top discriminative power (Mann–Whitney U test). Shape features were the most robust feature class. T2 map was the most robust imaging technique (robust features (rf), n = 84). The HASTE sequence led to the fewest rf (n = 20). Intra-observer ICC was excellent (≥ 0.75) for nearly all features (max–min; 99.1–97.2%). A deterioration of ICC values was seen in the inter-observer analyses (max–min; 88.7–81.1%). Complete robustness across all sequences was found for 8 features. Shape features and T2 map yielded the highest pairwise discriminative performance. Radiomics validity depends on the MRI sequence and feature class. T2 map seems to be the most promising imaging technique, with the highest feature robustness, high intra-/inter-observer reproducibility, and the most promising discriminative power.
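The robustness measure can be illustrated with Lin's concordance correlation coefficient, a common CCC definition; the scan-rescan values below are invented toy numbers, not the study's measurements.

```python
import numpy as np

def lin_ccc(x, y):
    """Lin's concordance correlation coefficient: agreement between two
    measurements, penalising both random scatter and systematic shift."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    cov = np.cov(x, y, bias=True)[0, 1]
    return 2 * cov / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

# Hypothetical scan-rescan values of one radiomic feature for ten fruits
scan = np.array([5.1, 7.4, 3.3, 9.0, 6.2, 4.8, 8.1, 5.5, 7.0, 3.9])
rescan = scan + np.random.default_rng(2).normal(scale=0.1, size=10)
ccc = lin_ccc(scan, rescan)
```

The (mean(x) - mean(y))² term in the denominator penalises systematic offsets between scan and rescan, which a plain Pearson correlation would ignore.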
An exploratory latent class analysis of student expectations towards learning analytics services
(2021)
For service implementations to be widely adopted, it is necessary for the expectations of the key stakeholders to be considered. Failure to do so may lead to services reflecting ideological gaps, which will inadvertently create dissatisfaction among their users. Learning analytics research has begun to recognise the importance of understanding the student perspective towards the services that could potentially be offered; however, student engagement remains low. Furthermore, there has been no attempt to explore whether students can be segmented into different groups based on their expectations towards learning analytics services. Such a segmentation would allow for a greater understanding of what is and is not expected from learning analytics services within a sample of students. The current exploratory work addresses this limitation by using the three-step approach to latent class analysis to understand whether student expectations of learning analytics services can clearly be segmented, using self-report data obtained from a sample of students at an Open University in the Netherlands. The findings show that student expectations regarding ethical and privacy elements of a learning analytics service are consistent across all groups, whereas expectations of service features are quite variable. These results are discussed in relation to previous work on student stakeholder perspectives, policy development, and the European General Data Protection Regulation (GDPR).
Automatically generating scenes from texts is an interesting task in computer science. For this task, VANNOTATOR (Mehler and Abrami 2019; Abrami, Spiekermann and Mehler 2019; Spiekermann, Abrami and Mehler 2018) was developed, a framework that enables the description and labeling of VR scenes. To provide the 3D objects required for such scenes, suitable databases are needed, and they must be extensively annotated for this task to be accomplished. In the case of VANNOTATOR, the ShapeNetSem database was therefore used (Abrami, Henlein, Kett et al. 2020).
The more detailed a scene is rendered, the more detailed its textual description can be. For this reason, the database is extended by a subset of PartNet (Mo et al. 2019). This adds the option of segmenting objects and thereby extends the annotatable vocabulary. Some of the existing ShapeNetSem objects have the property of also being PartNet objects. This thesis deals with the implementation of how ShapeNetSem objects with associated PartNet objects can be replaced by the latter. To accomplish this, a panel was designed in which a PartNet object is listed together with its individual segments. These segments can then be selected and placed in a scene just like ShapeNetSem objects. In this way, 1,881 objects with a further 34,016 sub-objects are made available to VANNOTATOR. This enlarged vocabulary helps to advance natural language processing even more effectively and precisely.
Abstract: The human visual cortex enables visual perception through a cascade of hierarchical computations in cortical regions with distinct functionalities. Here, we introduce an AI-driven approach to discover the functional mapping of the visual cortex. We related human brain responses to scene images measured with functional MRI (fMRI) systematically to a diverse set of deep neural networks (DNNs) optimized to perform different scene perception tasks. We found a structured mapping between DNN tasks and brain regions along the ventral and dorsal visual streams. Low-level visual tasks mapped onto early brain regions, 3-dimensional scene perception tasks mapped onto the dorsal stream, and semantic tasks mapped onto the ventral stream. This mapping was of high fidelity, with more than 60% of the explainable variance in nine key regions being explained. Together, our results provide a novel functional mapping of the human visual cortex and demonstrate the power of the computational approach.
Author Summary: Human visual perception is a complex cognitive feat known to be mediated by distinct cortical regions of the brain. However, the exact function of these regions remains unknown, and thus it remains unclear how those regions together orchestrate visual perception. Here, we apply an AI-driven brain mapping approach to reveal visual brain function. This approach integrates multiple artificial deep neural networks trained on a diverse set of functions with functional recordings of the whole human brain. Our results reveal a systematic tiling of visual cortex by mapping regions to particular functions of the deep networks. Together this constitutes a comprehensive account of the functions of the distinct cortical regions of the brain that mediate human visual perception.
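The style of analysis described, relating DNN activations to brain responses and scoring explained variance, is commonly implemented as a linear encoding model. A minimal sketch on simulated data follows; all sizes, the ridge penalty, and the noise level are invented assumptions, and the study's actual pipeline may differ:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy stand-ins: activations of a hypothetical task-DNN for 300 scene
# images, and simulated fMRI responses of 50 voxels in one region.
n_img, n_feat, n_vox = 300, 40, 50
feats = rng.normal(size=(n_img, n_feat))
true_w = rng.normal(size=(n_feat, n_vox))
voxels = feats @ true_w + rng.normal(scale=2.0, size=(n_img, n_vox))

# Ridge-regularised linear encoding model, fit on half the images
train, test = slice(0, 150), slice(150, 300)
lam = 1.0
X = feats[train]
W = np.linalg.solve(X.T @ X + lam * np.eye(n_feat), X.T @ voxels[train])

# Per-voxel explained variance (R^2) on the held-out half
pred = feats[test] @ W
resid = voxels[test] - pred
explained = 1.0 - resid.var(axis=0) / voxels[test].var(axis=0)
mean_r2 = explained.mean()
```

Fitting one such model per DNN and per brain region, and comparing the explained-variance profiles across regions, yields the kind of task-to-region mapping reported above.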
The main topic of the present thesis is scene flow estimation in a monocular camera system. Scene flow describes the joint representation of 3D positions and motions of the scene. A special focus is placed on approaches that combine two kinds of information, deep-learning-based single-view depth estimation and model-based multi-view geometry.
The first part addresses single-view depth estimation focussing on a method that provides single-view depth information in an advantageous form for monocular scene flow estimation methods. A convolutional neural network, called ProbDepthNet, is proposed, which provides pixel-wise well-calibrated depth distributions. The experiments show that different strategies for quantifying the measurement uncertainty provide overconfident estimates due to overfitting effects. Therefore, a novel recalibration technique is integrated as part of the ProbDepthNet, which is validated to improve the calibration of the uncertainty measures. The monocular scene flow methods presented in the subsequent parts confirm that the integration of single-view depth information results in the best performance if the neural network provides depth distributions instead of single depth values and contains a recalibration.
Three methods for monocular scene flow estimation are presented, each one designed to combine multi-view geometry-based optimization with deep learning-based single-view depth estimation such as ProbDepthNet. While the first method, SVD-MSfM, performs the motion and depth estimation as two subsequent steps, the second method, Mono-SF, jointly optimizes the motion estimates and the depth structure. Both methods are tailored to address scenes, where the objects and motions can be represented by a set of rigid bodies. Dynamic traffic scenes are one kind of scenes that essentially fulfill this characteristic. The method, Mono-Stixel, uses an even more specialized scene model for traffic scenes, called stixel world, as underlying scene representation.
The proposed methods provide a new state of the art for monocular scene flow estimation, with Mono-SF being the first and leading monocular method on the KITTI scene flow benchmark at the time of submission of the present thesis. The experiments validate that both kinds of information, the multi-view geometric optimization and the single-view depth estimates, contribute to the monocular scene flow estimates and are necessary to achieve the new state-of-the-art accuracy.
Contemporary information systems make widespread use of artificial intelligence (AI). While AI offers various benefits, it can also be subject to systematic errors, whereby people from certain groups (defined by gender, age, or other sensitive attributes) experience disparate outcomes. In many AI applications, disparate outcomes confront businesses and organizations with legal and reputational risks. To address these risks, technologies for so-called "AI fairness" have been developed, by which AI is adapted such that mathematical constraints for fairness are fulfilled. However, the financial costs of AI fairness are unclear. Therefore, the authors develop AI fairness for a real-world use case from e-commerce, where coupons are allocated according to clickstream sessions. In their setting, the authors find that AI fairness successfully manages to adhere to fairness requirements while reducing the overall prediction performance only slightly. However, they find that AI fairness also results in an increase in financial cost. In this way, the paper's findings contribute to designing information systems on the basis of AI fairness.
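One common way to fulfil such a mathematical fairness constraint is to equalise selection rates across groups (demographic parity) with per-group decision thresholds. The sketch below uses invented scores and group labels, not the paper's coupon model:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy coupon-allocation scores for two hypothetical user groups;
# the model systematically favours group 1.
group = rng.integers(0, 2, size=2000)           # sensitive attribute
score = rng.normal(loc=0.5 * group, size=2000)

def selection_rates(score, group, thresholds):
    """Fraction of each group receiving a coupon under per-group thresholds."""
    sel = score > np.where(group == 1, thresholds[1], thresholds[0])
    return sel[group == 0].mean(), sel[group == 1].mean()

# A single shared threshold leaves a demographic-parity gap ...
r0, r1 = selection_rates(score, group, (0.5, 0.5))
gap_before = abs(r1 - r0)

# ... which per-group thresholds remove: give each group its own
# 70th-percentile cut-off, so both groups have the same selection rate.
t0 = np.quantile(score[group == 0], 0.7)
t1 = np.quantile(score[group == 1], 0.7)
r0f, r1f = selection_rates(score, group, (t0, t1))
gap_after = abs(r1f - r0f)
```

The equalised thresholds close the selection-rate gap, but they also move some coupons away from the highest-scoring sessions; a reallocation of this kind is one source of the financial cost the paper quantifies.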
This thesis describes a Text2Scene application implemented in virtual reality (VR). The system enables users to recreate a scene virtually from a textual description of it. This offers a new way of interacting with a text, one that emphasizes the visual component and thus makes a story experienceable in new ways.
To this end, the user can either load a prepared text from the server or create their own, which is then processed automatically. The physical objects present in the text are detected automatically and made available to the user as 3D objects in the virtual environment. These can then be placed manually, thereby creating the scene described in the source text. The goal of the text processing is to describe the objects as precisely as possible, so that they can be looked up in the object database in a targeted manner.
The text processing places particular emphasis on detecting part-whole relations, so that objects occurring in the text that have a holonym are automatically linked to it. At the same time, the part-whole relation is also examined more closely in the other direction. The text processing should furthermore be able to specify objects more precisely and adapt them to the context of the text. In addition, the natural language processing (NLP) was extended so that the context of the text is recognized and the objects are categorized accordingly. The text processing is implemented with the help of a neural network. The tools used for detecting part-whole relations, context, and object specifications were evaluated on text inputs with respect to the accuracy of their output.
To make use of the text processing, a virtual scene was developed that allows users to create their own scenes from previously loaded or entered texts.
For this purpose, the user can have objects loaded manually or automatically and then place them.
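The part-whole linking described above can be sketched with a toy holonym lookup. The dictionary and function names below are invented for illustration and do not reflect the thesis's neural-network implementation; a real system might instead query a lexical resource such as WordNet for holonyms.

```python
# Toy holonym table mapping a part to its whole (invented examples).
HOLONYMS = {
    "wheel": "car",
    "door": "house",
    "branch": "tree",
}

def link_part_whole(detected_objects):
    """Attach each detected object to its holonym, if one is known."""
    links = {}
    for obj in detected_objects:
        if obj in HOLONYMS:
            links[obj] = HOLONYMS[obj]
    return links

objects = ["wheel", "table", "door"]
print(link_part_whole(objects))  # {'wheel': 'car', 'door': 'house'}
```

In the application, such links let a detected part (e.g. "wheel") pull in or constrain the whole object ("car") when the 3D object database is searched.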
Analysing survival or fixation probabilities for a beneficial allele is a prominent task in the field of theoretical population genetics. Haldane's asymptotics is an approximation for the fixation probability in the case of a single beneficial mutant with small selective advantage in a large population.
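In its classical form, for a single mutant with small selective advantage $s$ and offspring variance $\sigma^2$, Haldane's approximation for the fixation probability $\pi$ can be stated as:

```latex
\pi \approx \frac{2s}{\sigma^2},
\qquad \text{reducing to } \pi \approx 2s \text{ for Poisson offspring } (\sigma^2 \approx 1).
```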
In this thesis we analyse the interplay between genetic drift and directional selection and prove Haldane's asymptotics in different settings: for the fixation probability in Cannings models with moderate selection, and for the survival probability of slightly supercritical branching processes in a random environment.
In Chapter 3 we introduce a class of Cannings models with selection that allow for a forward and a backward construction. In particular, a Cannings ancestral selection process can be defined for this class of models; it counts the number of potential parents and is in sampling duality to the forward frequency process. By means of this duality, the fixation probability can be expressed in terms of the expectation of the Cannings ancestral selection process in stationarity. Controlling this expectation yields that the fixation probability fulfils Haldane's asymptotics in a regime of moderately weak selection (Thm. 8).
In Chapter 4 we study the fixation probability of Cannings models in a regime of moderately strong selection. Here couplings of the frequency process of beneficial individuals with slightly supercritical Galton-Watson processes imply that the fixation probability is given by Haldane's asymptotics (Thm. 9).
Lastly, in Chapter 5 we consider slightly supercritical branching processes in an independent and identically distributed random environment and study the probability of survival as the expected number of offspring tends to one from above. We show that the random environment has a non-trivial influence on the probability of survival, resulting in a modification of Haldane's asymptotics, only if the variance and the expectation of the random offspring mean are of the same order. Outside this critical parameter regime, the population either goes extinct or survives with a probability that fulfils Haldane's asymptotics (Thm. 10).
The proof establishes an expression for the survival probability in terms of the shape function of the random offspring generating functions. This expression exhibits similarities to perpetuities known from a financial context. Consequently, we prove a limiting theorem for perpetuities with vanishing interest rates (Thm. 11).
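Haldane's asymptotics for survival can be checked numerically in a simple special case without a random environment: for a Galton-Watson process with Poisson(1+s) offspring, the extinction probability q is the fixed point of the offspring generating function, q = exp((1+s)(q-1)), and the survival probability 1-q is close to 2s/σ² = 2s/(1+s) for small s. This is a standard textbook computation, not code from the thesis.

```python
import math

def survival_probability(s, iterations=10_000):
    """Survival probability of a Galton-Watson process with Poisson(1+s) offspring.

    The extinction probability q solves q = exp((1+s)*(q-1)); we find it by
    fixed-point iteration starting from q = 0, which converges monotonically.
    """
    lam = 1.0 + s
    q = 0.0
    for _ in range(iterations):
        q = math.exp(lam * (q - 1.0))
    return 1.0 - q

s = 0.05
p = survival_probability(s)
haldane = 2 * s / (1 + s)  # Haldane's asymptotics with offspring variance 1 + s
print(p, haldane)          # roughly 0.094 vs 0.095
```

As s decreases, the exact survival probability and Haldane's approximation agree ever more closely, which is the content of the asymptotics.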
This study explores how ‘gatherings’ turn into ‘encounters’ in a virtual world (VW) context. Most communication technologies enable only focused encounters between distributed participants, but in VWs both gatherings and encounters can occur. We present close sequential analysis of moments when, after a silent gathering, interaction among participants in a VW is gradually resumed, and also investigate the social actions in the verbal (re-)opening turns. Our findings show that, as in face-to-face situations, participants in VWs often use different types of embodied resources to achieve the transition rather than relying on verbal means alone. However, the transition process in VWs has distinctive characteristics compared to face-to-face situations. We discuss how participants in a VW use virtually embodied pre-beginnings to display what we call encounter-readiness, instead of displaying lack of presence through avatar stillness. The data comprise 40 episodes of video-recorded team interactions in a VW.
The paper presents research results emerging from the analysis of Intelligent Personal Assistant (IPA) log data. Based on the assumption that media and data, as part of practice, are produced and used cooperatively, the paper discusses how IPA log data can be used to analyze (1) how the IPA systems operate through their connection to platforms and infrastructures, (2) how the dialog systems are designed today, and (3) how users integrate them into their everyday social interaction. It also asks in which everyday practical contexts the IPAs are placed on the system side and on the user side, and how privacy issues in particular are negotiated. It is argued that, in order to investigate these questions, the technical-institutional and the cultural-theoretical perspectives on media that are common in German media linguistics have to be complemented by a more fundamental, i.e. social-theoretical and interactionist, perspective.