We consider a linear ill-posed equation in the Hilbert space setting. Multiple independent unbiased measurements of the right-hand side are available. A natural approach is to take the average of the measurements as an approximation of the right-hand side and to estimate the data error as the inverse of the square root of the number of measurements. We calculate the optimal convergence rate (as the number of measurements tends to infinity) under classical source conditions and introduce a modified discrepancy principle, which asymptotically attains this rate.
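The claim that the data error of the averaged right-hand side behaves like the inverse of the square root of the number of measurements can be illustrated numerically. The following sketch is illustrative only; the scalar Gaussian measurement model and all parameters are assumptions, not taken from the paper:

```python
import math
import random

def averaged_measurement(y_true, sigma, n, rng):
    """Average of n independent unbiased measurements with noise level sigma."""
    return sum(y_true + rng.gauss(0.0, sigma) for _ in range(n)) / n

def rms_error(n, sigma=1.0, trials=2000, seed=0):
    """Empirical root-mean-square error of the n-fold average."""
    rng = random.Random(seed)
    sq = sum(averaged_measurement(0.0, sigma, n, rng) ** 2 for _ in range(trials))
    return math.sqrt(sq / trials)

# Theory: the effective noise level of the average is sigma / sqrt(n),
# which is the data-error estimate fed into the discrepancy principle.
e1, e100 = rms_error(1), rms_error(100)
```

With these parameters, e100 comes out close to e1 / 10, matching the sigma/sqrt(n) estimate.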
AttendAffectNet-emotion prediction of movie viewers using multimodal fusion with self-attention
(2021)
In this paper, we tackle the problem of predicting the affective responses of movie viewers based on the content of the movies. Current studies on this topic focus on video representation learning and fusion techniques to combine the extracted features for predicting affect. Yet, they typically ignore the correlations between multiple modality inputs as well as the correlations between temporal inputs (i.e., sequential features). To explore these correlations, we propose a neural network architecture, AttendAffectNet (AAN), that uses the self-attention mechanism to predict the emotions of movie viewers from different input modalities. In particular, visual, audio, and text features are considered for predicting emotions, expressed in terms of valence and arousal. We analyze three variants of our proposed AAN: Feature AAN, Temporal AAN, and Mixed AAN. The Feature AAN applies the self-attention mechanism in an innovative way on the features extracted from the different modalities (including video, audio, and movie subtitles) of a whole movie, thereby capturing the relationships between them. The Temporal AAN takes the time domain of the movies and the sequential dependency of affective responses into account: self-attention is applied on the concatenated (multimodal) feature vectors representing subsequent movie segments. The Mixed AAN combines the strong points of the Feature AAN and the Temporal AAN by applying self-attention first on the feature vectors obtained from the different modalities in each movie segment, and then on the feature representations of all subsequent (temporal) movie segments. We extensively trained and validated our proposed AAN on both the MediaEval 2016 dataset for the Emotional Impact of Movies Task and the extended COGNIMUSE dataset.
Our experiments demonstrate that audio features play a more influential role than those extracted from video and movie subtitles when predicting the emotions of movie viewers on these datasets. The models that use all visual, audio, and text features simultaneously as their inputs performed better than those using features extracted from each modality separately. In addition, the Feature AAN outperformed other AAN variants on the above-mentioned datasets, highlighting the importance of taking different features as context to one another when fusing them. The Feature AAN also performed better than the baseline models when predicting the valence dimension.
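The core fusion step, self-attention across per-modality feature vectors, can be sketched in a few lines. This is a deliberately simplified, dependency-free illustration, not the AAN itself: it uses the raw features as queries, keys, and values (real transformer-style models add learned Q/K/V projections and multiple heads), and the feature values below are made up:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def self_attention(X):
    """Scaled dot-product self-attention with Q = K = V = X.

    X is a list of feature vectors, e.g. one per modality
    (video, audio, subtitles). Returns fused vectors of the same shape,
    each a convex combination of all input vectors.
    """
    d = len(X[0])
    scores = [[sum(q * k for q, k in zip(X[i], X[j])) / math.sqrt(d)
               for j in range(len(X))] for i in range(len(X))]
    weights = [softmax(row) for row in scores]
    return [[sum(w * X[j][c] for j, w in enumerate(row)) for c in range(d)]
            for row in weights]

# Three hypothetical modality embeddings (video, audio, subtitles), d = 4:
feats = [[1.0, 0.0, 0.0, 0.0],
         [0.0, 1.0, 0.0, 0.0],
         [0.5, 0.5, 0.0, 0.0]]
fused = self_attention(feats)
```

Each fused vector mixes information from every modality, which is exactly the "features as context to one another" effect the experiments highlight.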
The present paper is concerned with the half-space Dirichlet problem [...] where ℝ^N_+ := {x ∈ ℝ^N : x_N > 0} for some N ≥ 1, and p > 1, c > 0 are constants. We analyse the existence, non-existence and multiplicity of bounded positive solutions to (P_c). We prove that the existence and multiplicity of bounded positive solutions to (P_c) depend in a striking way on the value of c > 0 and also on the dimension N. We find an explicit number c_p ∈ (1, √e), depending only on p, which determines the threshold between existence and non-existence. In particular, in dimensions N ≥ 2, we prove that, for 0 < c < c_p, problem (P_c) admits infinitely many bounded positive solutions, whereas, for c > c_p, there are no bounded positive solutions to (P_c).
Nowadays, digitalization has an immense impact on the landscape of jobs. This technological revolution creates new industries and professions, promises greater efficiency and improves the quality of working life. However, emerging technologies such as robotics and artificial intelligence (AI) are reducing human intervention, thus advancing automation and eliminating thousands of jobs and whole occupational images. To prepare employees for the changing demands of work, adequate and timely training of the workforce and real-time support of workers in new positions are necessary. Therefore, it is investigated whether user-oriented technologies such as augmented reality (AR) and virtual reality (VR) can be applied "on-the-job" for such training and support, also known as intelligence augmentation (IA). To address this problem, this work synthesizes the results of a systematic literature review as well as a practically oriented search on augmented reality and virtual reality use cases within the IA context. A total of 150 papers and use cases are analyzed to identify suitable areas of application in which it is possible to enhance employees' capabilities. The results of both theoretical and practical work show that VR is primarily used to train employees without prior knowledge, whereas AR is used to expand the scope of competence of individuals in their field of expertise while on the job. Based on these results, a framework is derived which provides practitioners with guidelines as to how AR or VR can support workers at their job so that they can keep up with anticipated skill demands. Furthermore, it shows for which application areas AR or VR can provide workers with sufficient training to learn new job tasks. By that, this research provides practical recommendations in order to accompany the imminent distortions caused by AI and similar technologies and to alleviate the associated negative effects on the German labor market.
Background: The ability to approximate intra-operative hemoglobin loss with reasonable precision and linearity is prerequisite for determination of a relevant surgical outcome parameter: This information enables comparison of surgical procedures between different techniques, surgeons or hospitals, and supports anticipation of transfusion needs. Different formulas have been proposed, but none of them were validated for accuracy, precision and linearity against a cohort with precisely measured hemoglobin loss and, possibly for that reason, neither has established itself as gold standard. We sought to identify the minimal dataset needed to generate reasonably precise and accurate hemoglobin loss prediction tools and to derive and validate an estimation formula.
Methods: Routinely available clinical and laboratory data from a cohort of 401 healthy individuals with controlled hemoglobin loss between 29 and 233 g were extracted from medical charts. Supervised learning algorithms were applied to identify a minimal data set and to generate and validate a formula for calculation of hemoglobin loss.
Results: Of the classical supervised learning algorithms applied, the linear and Ridge regression models performed at least as well as the more complex models. Since it is the most straightforward to analyze and check for robustness, we proceeded with linear regression. Weight, height, sex, and the hemoglobin concentrations before and on the morning after the intervention were sufficient to generate a formula for estimating hemoglobin loss. The resulting model yields an outstanding R2 of 53.2% with similar precision throughout the entire range of volumes and donor sizes, thereby meaningfully outperforming previously proposed medical models.
Conclusions: The resulting formula will allow objective benchmarking of surgical blood loss, enabling informed decision making as to the need for pre-operative type-and-cross only vs. reservation of packed red cell units, depending on a patient’s anemia tolerance, and thus contributing to resource management.
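The model-fitting step described above, ordinary least squares on a handful of routinely available predictors, can be sketched as follows. The predictors, patients, and coefficients below are synthetic for illustration; this is not the validated formula derived in the paper:

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def fit_ols(X, y):
    """Ordinary least squares via the normal equations (X^T X) beta = X^T y."""
    cols = list(zip(*X))
    XtX = [[sum(a * b for a, b in zip(ci, cj)) for cj in cols] for ci in cols]
    Xty = [sum(a * b for a, b in zip(ci, y)) for ci in cols]
    return solve(XtX, Xty)

# Synthetic patients: [intercept, weight (kg), Hb drop (g/L)] -> Hb loss (g),
# generated from loss = 2 + 0.5 * weight + 4 * drop (made-up coefficients).
X = [[1.0, 70.0, 10.0], [1.0, 80.0, 20.0], [1.0, 60.0, 30.0], [1.0, 90.0, 15.0]]
y = [77.0, 122.0, 152.0, 107.0]
beta = fit_ols(X, y)  # recovers approximately [2.0, 0.5, 4.0]
```

On noise-free synthetic data the fit recovers the generating coefficients exactly; on real cohort data the same procedure yields the best linear predictor in the least-squares sense.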
We establish weighted L^p-Fourier extension estimates for O(N−k) × O(k)-invariant functions defined on the unit sphere S^{N−1}, allowing for exponents p below the Stein–Tomas critical exponent 2(N+1)/(N−1). Moreover, in the more general setting of an arbitrary closed subgroup G ⊂ O(N) and G-invariant functions, we study the implications of weighted Fourier extension estimates with regard to boundedness and nonvanishing properties of the corresponding weighted Helmholtz resolvent operator. Finally, we use these properties to derive new existence results for G-invariant solutions to the nonlinear Helmholtz equation −Δu − u = Q(x)|u|^{p−2}u, u ∈ W^{2,p}(ℝ^N), where Q is a nonnegative, bounded and G-invariant weight function.
In this survey paper, we present a multiscale post-processing method in exploration. Based on a physically relevant mollifier technique involving the elasto-oscillatory Cauchy–Navier equation, we mathematically describe the extractable information within 3D geological models obtained by migration as is commonly used for geophysical exploration purposes. More explicitly, the developed multiscale approach extracts and visualizes structural features inherently available in signature bands of certain geological formations such as aquifers, salt domes etc. by specifying suitable wavelet bands.
We prove new existence results for a nonlinear Helmholtz equation with sign-changing nonlinearity of the form −Δu − k²u = Q(x)|u|^{p−2}u, u ∈ W^{2,p}(ℝ^N), with k > 0, N ≥ 3, p ∈ [2(N+1)/(N−1), 2N/(N−2)) and Q ∈ L^∞(ℝ^N). Due to the sign-changes of Q, our solutions have infinite Morse index in the corresponding dual variational formulation.
The recently introduced Lipschitz–Killing curvature measures on pseudo-Riemannian manifolds satisfy a Weyl principle, i.e. are invariant under isometric embeddings. We show that they are uniquely characterized by this property. We apply this characterization to prove a Künneth-type formula for Lipschitz–Killing curvature measures, and to classify the invariant generalized valuations and curvature measures on all isotropic pseudo-Riemannian space forms.
Biodiversity information is contained in countless digitized and unprocessed scholarly texts. Although automated extraction of these data has been gaining momentum for years, there are still innumerable text sources that are poorly accessible and require a more advanced range of methods to extract relevant information. To improve the access to semantic biodiversity information, we have launched the BIOfid project (www.biofid.de) and have developed a portal to access the semantics of German language biodiversity texts, mainly from the 19th and 20th century. However, to make such a portal work, a couple of methods had to be developed or adapted first. In particular, text-technological information extraction methods were needed, which extract the required information from the texts. Such methods draw on machine learning techniques, which in turn are trained on learning data. To this end, among others, we gathered the BIOfid text corpus, which is a cooperatively built resource, developed by biologists, text technologists, and linguists. A special feature of BIOfid is its multiple annotation approach, which takes into account both general and biology-specific classifications, and by this means goes beyond previous, typically taxon- or ontology-driven proper name detection. We describe the design decisions and the genuine Annotation Hub Framework underlying the BIOfid annotations and present agreement results. The tools used to create the annotations are introduced, and the use of the data in the semantic portal is described. Finally, some general lessons, in particular regarding multiple annotation projects, are drawn.
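Agreement between annotators, as reported for multiply annotated corpora like BIOfid's, is commonly quantified with chance-corrected coefficients such as Cohen's kappa. A minimal sketch for two annotators follows; the label sequences are invented, and the paper may report different or additional agreement measures:

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two annotators' label sequences of equal length."""
    assert len(a) == len(b) and a
    n = len(a)
    labels = set(a) | set(b)
    p_observed = sum(x == y for x, y in zip(a, b)) / n
    p_expected = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical taxon annotations by two annotators:
ann1 = ["Taxon", "Taxon", "Other", "Taxon"]
ann2 = ["Taxon", "Other", "Other", "Taxon"]
kappa = cohens_kappa(ann1, ann2)  # 0.5 here: 75% raw agreement, 50% by chance
```

Unlike raw percentage agreement, kappa discounts the agreement expected from the annotators' label distributions alone, which matters when one class dominates the corpus.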