004 Datenverarbeitung; Informatik
Year of publication
- 2021 (61)
Document Type
- Article (39)
- Doctoral Thesis (9)
- Preprint (8)
- Bachelor Thesis (2)
- Conference Proceeding (1)
- Master's Thesis (1)
- Report (1)
Language
- English (61)
Has Fulltext
- yes (61)
Is part of the Bibliography
- no (61)
Keywords
- artificial intelligence (3)
- machine learning (3)
- data science (2)
- healthcare (2)
- trustworthy AI (2)
- (re-)openings (1)
- 3D image analysis (1)
- AI fairness (1)
- Adaptive control (1)
- Adoption (1)
- Algorithmic fairness (1)
- Amblyopia (1)
- Annotation (1)
- Approximation Algorithms (1)
- Artificial intelligence (1)
- Automatic (1)
- BIOfid (1)
- Bayesian Persuasion (1)
- Binocular Rivalry (1)
- Biodiversity (1)
- Biophysical models (1)
- Browsertool (1)
- CBM detector (1)
- COGNIMUSE (1)
- Cognitive Maps (1)
- Cognitive Spatial Distortions (1)
- Collective cell migration (1)
- Complementary mobility services (1)
- Computational geometry (1)
- Computational models (1)
- Computational neuroscience (1)
- Computational science (1)
- Computer Vision (1)
- Computer science (1)
- Connected Components (1)
- Core-component reuse (1)
- Costs (1)
- Curse of dimensionality (1)
- Data Analysis (1)
- Degradation (1)
- Delegated Search (1)
- Diagnostic markers (1)
- Discrete choice experiment (1)
- Discrete time dynamic programming (1)
- Dual response (1)
- Dynamic portfolio choice (1)
- Electric vehicles (1)
- Experimental Evaluation (1)
- External Memory (1)
- Fair AI (1)
- Functional magnetic resonance imaging (1)
- Gradient-based optimization (1)
- Graph Algorithms (1)
- HL7 FHIR (1)
- Hierarchical B-splines (1)
- Higher education (1)
- Human factors (1)
- Individual differences (1)
- Inter-annotator agreement (1)
- Julia (1)
- LDPC Codes (1)
- Learning analytics (1)
- Light-sheet fluorescence microscopy (1)
- Line reconstruction (1)
- Linear regression analysis (1)
- Location-based games (1)
- Machine learning (1)
- MediaEval 2016 (1)
- Mobile games (1)
- Monocular Scene Flow (1)
- Named entity recognition (1)
- Neural Network (1)
- Neural networks (1)
- Noisy point clouds (1)
- Nutrition (1)
- Olfactory system (1)
- Online Algorithms (1)
- OpenStreetMap quality evaluation (1)
- Paramecium (1)
- Particle image velocimetry (1)
- Permutation (1)
- Plasticity (1)
- PointNet (1)
- Preclinical research (1)
- Predictive markers (1)
- Product life cycle (1)
- Prognostic markers (1)
- Proteomics (1)
- Python (1)
- RNA biology (1)
- RNA interference (1)
- Randomization (1)
- Sample-based longitudinal study (1)
- Semantic portal (1)
- Semantics (1)
- Sensory perception (1)
- Software (1)
- Spatially adaptive sparse grids (1)
- Specialized information service (1)
- Student expectations (1)
- TDOA (1)
- TMT (1)
- Taxon (1)
- Tobler's First Law (1)
- Traffic Scenes (1)
- Translation (1)
- Translational research (1)
- Tribolium castaneum (1)
- Vision (1)
- Visual cortex (1)
- Volunteered Geographic Information (1)
- Z-inspection (1)
- acoustic multilateration (1)
- affective computing (1)
- animal behavior states (1)
- animal detection (1)
- animal sounds (1)
- animal welfare (1)
- attack scenarios (1)
- automated monitoring (1)
- automotive sector (1)
- avatars (1)
- base stations (1)
- batteries (1)
- behavioral research (1)
- bioacoustics (1)
- cardiac arrest (1)
- case study (1)
- centrality (1)
- chatbots (1)
- clinical trials (1)
- cluster computing (1)
- co-located collaboration analytics (1)
- co-presence (1)
- coding theory (1)
- coincidence detection (1)
- collaboration (1)
- collaboration analytics (1)
- comparison (1)
- computer vision (1)
- conjoint analysis (1)
- consumer behavior (1)
- conversation analysis (1)
- convolutional neural networks (1)
- corpus study (1)
- data sharing (1)
- de-identification (1)
- debugging (1)
- deep learning (1)
- deep learning tools (1)
- dendrites (1)
- device-to-device communication (1)
- diabetes mellitus (1)
- digital distractions (1)
- digital medicine (1)
- disaster risk management (1)
- domains (1)
- ecology of savannah animals (1)
- economics (1)
- education (1)
- emotion prediction (1)
- encounter (1)
- epigenome (1)
- ethical co-design (1)
- ethical trade-off (1)
- ethics (1)
- explainability (1)
- explainable AI (1)
- field mapping (1)
- field papers (1)
- flood risk perception (1)
- flooding (1)
- gathering (1)
- generalized uncertainty principle (1)
- geodesic equation (1)
- group speech analytics (1)
- health information interoperability (1)
- heavy ion collisions (1)
- high performance computing (1)
- human olfaction (1)
- image classification (1)
- impact parameter (1)
- internet (1)
- interpretability (1)
- line element (1)
- literature review (1)
- machine-learning (1)
- macronucleus (1)
- malignant melanoma (1)
- media multitasking (1)
- metric tensor (1)
- mobile communication (1)
- multimodal fusion (1)
- multimodal interaction (1)
- multimodal learning analytics (1)
- network model (1)
- neural network decoder (1)
- neural networks (1)
- newspaper (1)
- non-invasive (1)
- noncommutative geometry (1)
- participation (1)
- patients (1)
- patient–doctor relationship (1)
- pedagogical roles (1)
- performance evaluation (1)
- plasticity (1)
- privacy preference (1)
- privacy setting (1)
- pulsed SILAC (1)
- pyramidal neuron (1)
- quantum gravity (1)
- receivers (1)
- relativity and gravitation (1)
- requirements analysis (1)
- self-attention (1)
- self-control (1)
- self-regulation (1)
- small RNA (1)
- sound localization (1)
- specialized vocabulary (1)
- sum-product algorithm (1)
- supervised learning (1)
- textbooks (1)
- threshold concepts (1)
- transition (1)
- trials registry (1)
- trust (1)
- trustworthy AI Co-design (1)
- user preferences (1)
- user study (1)
- virtual embodiment (1)
- virtual worlds (1)
- vocalization (1)
- wikipedia (1)
- willingness to forward (1)
- wireless communication (1)
- wireless networks (1)
Institute
- Informatik und Mathematik (18)
- Medizin (11)
- Informatik (10)
- Wirtschaftswissenschaften (10)
- Frankfurt Institute for Advanced Studies (FIAS) (6)
- Physik (6)
- Goethe-Zentrum für Wissenschaftliches Rechnen (G-CSC) (2)
- Biochemie und Chemie (1)
- Biowissenschaften (1)
- Buchmann Institut für Molekulare Lebenswissenschaften (BMLS) (1)
Biodiversity information is contained in countless digitized and unprocessed scholarly texts. Although automated extraction of these data has been gaining momentum for years, there are still innumerable text sources that are poorly accessible and require a more advanced range of methods to extract relevant information. To improve access to semantic biodiversity information, we have launched the BIOfid project (www.biofid.de) and developed a portal for accessing the semantics of German-language biodiversity texts, mainly from the 19th and 20th centuries. To make such a portal work, however, several methods first had to be developed or adapted. In particular, text-technological information extraction methods were needed to extract the required information from the texts. Such methods draw on machine learning techniques, which in turn are trained on learning data. To this end, we gathered, among other resources, the BIOfid text corpus, a cooperatively built resource developed by biologists, text technologists, and linguists. A special feature of BIOfid is its multiple-annotation approach, which takes into account both general and biology-specific classifications and thereby goes beyond previous, typically taxon- or ontology-driven proper name detection. We describe the design decisions and the genuine Annotation Hub Framework underlying the BIOfid annotations and present agreement results. The tools used to create the annotations are introduced, and the use of the data in the semantic portal is described. Finally, some general lessons, in particular regarding multiple annotation projects, are drawn.
The sketch map tool facilitates the assessment of OpenStreetMap data for participatory mapping
(2021)
A worldwide increase in the number of people and areas affected by disasters has led to more and more approaches that focus on integrating local knowledge into disaster risk reduction processes. The research at hand presents a method for formalizing this local knowledge via sketch maps in the context of flooding. The Sketch Map Tool enables not only the visualization of this local knowledge and analyses of OpenStreetMap data quality, but also the communication of the results of these analyses in an understandable way. Since the tool will be open source and several analyses run automatically, it also offers a method for local governments in areas where historic data or financial means for flood mitigation are limited. Example analyses for two cities in Brazil show the functionalities of the tool and allow an evaluation of its applicability. The results show that the fitness-for-purpose analysis of the OpenStreetMap data is a promising way to identify whether the sketch map approach can be used in a certain area or whether citizens might have problems marking their flood experiences. In this way, an intrinsic quality analysis is incorporated into a participatory mapping approach. Additionally, the different paper formats offered for printing enable not only individual mapping but also group mapping. Future work will focus on advancing the automation of all steps of the tool so that members of local governments without specific technical knowledge can apply the Sketch Map Tool to their own study areas.
1. The description and analysis of animal behavior over long periods of time is one of the most important challenges in ecology. However, most such studies are limited by the time and cost required by human observers. Collecting data via video recordings allows observation periods to be extended, but their evaluation by human observers remains very time-consuming. Progress in automated evaluation, using suitable deep learning methods, is a promising approach to analyzing even large amounts of video data in an adequate time frame.
2. In this study, we present a multistep convolutional neural network system for detecting three typical stances of African ungulates in zoo enclosures, which works with high accuracy. An important aspect of our approach is the introduction of model averaging and postprocessing rules to make the system robust to outliers.
3. Our trained system achieves an in-domain classification accuracy of >0.92, which is improved to >0.96 by a postprocessing step. In addition, the whole system performs well even in an out-of-domain classification task with two unknown types, achieving an average accuracy of 0.93. We provide our system at https://github.com/Klimroth/Video-Action-Classifier-for-African-Ungulates-in-Zoos/tree/main/mrcnn_based so that interested users can train their own models to classify images and conduct behavioral studies of wildlife.
4. The use of a multistep convolutional neural network for fast and accurate classification of wildlife behavior facilitates the evaluation of large amounts of image data in ecological studies and greatly reduces the effort of manual image analysis. Our system also shows that postprocessing rules are a suitable way to make species-specific adjustments and to substantially increase the accuracy of the description of single behavioral phases (number, duration). The results of the out-of-domain classification strongly suggest that our system is robust and achieves a high degree of accuracy even for new species, so that other settings (e.g., field studies) can be considered.
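The model averaging and outlier-suppressing postprocessing described above can be sketched as follows; the majority vote across models and the minimum-run-length rule are illustrative assumptions, not the authors' exact rules:

```python
from collections import Counter

def average_models(per_model_preds):
    """Model averaging: majority vote over several models' per-frame labels."""
    return [Counter(frame).most_common(1)[0][0] for frame in zip(*per_model_preds)]

def smooth(labels, min_run=3):
    """Postprocessing rule: reassign runs shorter than min_run frames to the
    preceding label, suppressing isolated outlier predictions."""
    out = list(labels)
    i = 0
    while i < len(out):
        j = i
        while j < len(out) and out[j] == out[i]:
            j += 1                      # find the end of the current run
        if j - i < min_run and i > 0:
            for k in range(i, j):       # short run: absorb into preceding label
                out[k] = out[i - 1]
        i = j
    return out
```

Rules of this kind are species-specific in practice, e.g. a longer `min_run` for stances that animals rarely hold briefly.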
When we browse via WiFi on a laptop or mobile phone, we receive data over a noisy channel, so the received message may differ from the one that was originally sent. Luckily, it is often possible to reconstruct the original message, but it may take a lot of time: decoding the received message is a complex problem, NP-hard to be exact. As we continue browsing, new information arrives at a high rate, so if lags are to be avoided and since memory is finite, little time is left for decoding. Coding theory tackles this problem by modeling the channels we use to communicate and tailoring codes to the channel properties. A well-known family of codes are Low-Density Parity-Check (LDPC) codes, which are widely used in standards such as WiFi and DVB-T2. In practical settings, the complexity of decoding a received message can be greatly reduced by using LDPC codes together with approximate decoding algorithms. This thesis lays out the basic construction of LDPC codes and their decoding with the sum-product algorithm. On this basis, a neural network to improve decoding is introduced: the sum-product algorithm is transformed into a neural network decoder. This approach was first presented by Nachmani et al. and treated in detail by Navneet Agrawal in 2017. To find out how machine learning can improve the codes, the bit error rates of the trained neural network decoder are compared with those of the classic sum-product algorithm. Experiments with static and dynamic training datasets of various sizes, various signal-to-noise ratios, and both a feed-forward and a recurrent architecture show how to tune the neural network decoder even further. The results of the experiments are used to verify statements made in Agrawal's work. In addition, corrections and improvements concerning the metrics are presented. An implementation of the neural network will be made publicly available to facilitate access for others.
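As background, the classic sum-product (belief propagation) decoder that the thesis builds on can be sketched like this; the dense message matrices and the flooding schedule are generic textbook choices, not taken from the thesis:

```python
import numpy as np

def sum_product_decode(H, llr, iters=20):
    """Sum-product (belief propagation) decoding on the Tanner graph of the
    parity-check matrix H. llr holds the channel log-likelihood ratios,
    with positive values meaning bit 0 is more likely."""
    m, n = H.shape
    M = np.zeros((m, n))                      # variable-to-check messages
    for c, v in zip(*np.nonzero(H)):
        M[c, v] = llr[v]
    for _ in range(iters):
        E = np.zeros((m, n))                  # check-to-variable messages
        for c in range(m):
            vs = np.nonzero(H[c])[0]
            t = np.tanh(M[c, vs] / 2)
            for i, v in enumerate(vs):
                prod = np.prod(np.delete(t, i))
                E[c, v] = 2 * np.arctanh(np.clip(prod, -0.999999, 0.999999))
        total = llr + E.sum(axis=0)           # a-posteriori LLRs
        hard = (total < 0).astype(int)        # hard decision per bit
        if not np.any((H @ hard) % 2):        # all parity checks satisfied?
            return hard
        for c, v in zip(*np.nonzero(H)):      # extrinsic information update
            M[c, v] = total[v] - E[c, v]
    return hard
```

The neural decoder of Nachmani et al. unrolls these iterations into network layers and attaches trainable weights to the messages; the plain version above is the baseline against which the bit error rates are compared.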
Sample-based longitudinal discrete choice experiments: preferences for electric vehicles over time
(2021)
Discrete choice experiments have emerged as the state-of-the-art method for measuring preferences, but they are mostly used in cross-sectional studies. In seeking to make them applicable for longitudinal studies, our study addresses two common challenges: working with different respondents and handling altering attributes. We propose a sample-based longitudinal discrete choice experiment in combination with a covariate-extended hierarchical Bayes logit estimator that allows one to test the statistical significance of changes. We showcase this method’s use in studies about preferences for electric vehicles over six years and empirically observe that preferences develop in an unpredictable, non-monotonous way. We also find that inspecting only the absolute differences in preferences between samples may result in misleading inferences. Moreover, surveying a new sample produced similar results as asking the same sample of respondents over time. Finally, we experimentally test how adding or removing an attribute affects preferences for the other attributes.
The anan project is a tool for debugging distributed high-performance computers. The novelty of this contribution is that well-known methods, already used successfully for debugging software and hardware, have been transferred to high-performance computing. As part of this work, a tool named anan was implemented that assists in debugging; it can also be used as a more dynamic form of monitoring. Both uses have been tested.
The tool consists of two parts:
1. a part named anan, which is operated interactively by the user,
2. and a part named anand, which automatically collects the requested measurements and executes commands when necessary.
The anan part runs sensors, small pattern-driven algorithms, whose results are merged by anan. To a first approximation, anan can be described as a monitoring system that (1) can be reconfigured quickly and (2) can measure more complex values going beyond correlations of simple time series.
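The sensor-and-merge design described above might be sketched as follows; the `Sensor` class, its pattern/measure callables, and the threshold values are purely hypothetical illustrations, not anan's actual interface:

```python
class Sensor:
    """A small pattern-driven measurement routine (hypothetical sketch):
    it reports a value only when its pattern matches the node state."""
    def __init__(self, name, pattern, measure):
        self.name, self.pattern, self.measure = name, pattern, measure

    def run(self, state):
        return {self.name: self.measure(state)} if self.pattern(state) else {}

def merge(results):
    """anan-side merging of the values reported by all sensors."""
    merged = {}
    for r in results:
        merged.update(r)
    return merged

# usage: one sensor fires on high load, another on idle nodes
high_load = Sensor("load", lambda s: s["load"] > 0.9, lambda s: s["load"])
idle = Sensor("idle", lambda s: s["load"] < 0.1, lambda s: 1.0)
state = {"load": 0.95}
report = merge([high_load.run(state), idle.run(state)])
```

Swapping the pattern or measure callables is what makes such a setup quickly reconfigurable, in the spirit of the monitoring use case described above.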
The ongoing digitalization of educational resources and the use of the internet lead to a steady increase in potentially available learning media. However, many of the media used for educational purposes have not been designed specifically for teaching and learning. Usually, linguistic criteria of readability and comprehensibility as well as content-related criteria are used independently to assess and compare the quality of educational media. This also holds true for educational media used in economics. This article aims to improve the analysis of textual learning media used in economic education by drawing on threshold concepts. Threshold concepts are key terms in knowledge acquisition within a domain. From a linguistic perspective, however, threshold concepts are instances of specialized vocabularies exhibiting particular linguistic features. In three kinds of (German) resources, namely textbooks, newspapers, and Wikipedia, we investigate the distributional profiles of 63 threshold concepts identified in economics education (collected from threshold concept research). We examine the threshold concepts' frequency distribution, their compound distribution, and their network structure within the three kinds of resources. Our analysis yields two main findings: first, the three kinds of resources can indeed be distinguished in terms of their threshold concepts' profiles; second, Wikipedia shows markedly stronger associative connections between economic threshold concepts than the other sources. We discuss the findings in relation to adequate media use for teaching and learning, not only in economic education.
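The frequency and co-occurrence-network profiles described above can be approximated with a small sketch; the function name, the sentence-level co-occurrence criterion, and the toy concepts are assumptions for illustration, not the study's actual method:

```python
from collections import Counter
from itertools import combinations

def concept_profile(sentences, concepts):
    """Frequency counts and sentence-level co-occurrence edges for a set
    of threshold concepts in a tokenized corpus."""
    freq = Counter()
    edges = Counter()                       # undirected co-occurrence network
    for sent in sentences:
        present = sorted({c for c in concepts if c in sent})
        freq.update(present)
        edges.update(combinations(present, 2))
    return freq, edges

# toy corpus with three hypothetical threshold concepts
sentences = [["inflation", "raises", "interest"],
             ["interest", "and", "inflation"],
             ["market", "and", "inflation"]]
freq, edges = concept_profile(sentences, {"inflation", "interest", "market"})
```

Edge weights of this kind are one simple way to compare the associative connectedness of concepts across textbooks, newspapers, and Wikipedia.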
This study explores how ‘gatherings’ turn into ‘encounters’ in a virtual world (VW) context. Most communication technologies enable only focused encounters between distributed participants, but in VWs both gatherings and encounters can occur. We present a close sequential analysis of moments when, after a silent gathering, interaction among participants in a VW is gradually resumed, and we also investigate the social actions in the verbal (re-)opening turns. Our findings show that, as in face-to-face situations, participants in VWs often use different types of embodied resources to achieve the transition rather than relying on verbal means only. However, the transition process in VWs has distinctive characteristics compared to face-to-face situations. We discuss how participants in a VW use virtually embodied pre-beginnings to display what we call encounter-readiness, instead of displaying lack of presence through avatar stillness. The data comprise 40 episodes of video-recorded team interactions in a VW.
Cortical pyramidal neurons have a complex dendritic anatomy, whose function is an active research field. In particular, the segregation between its soma and the apical dendritic tree is believed to play an active role in processing feed-forward sensory information and top-down or feedback signals. In this work, we use a simple two-compartment model accounting for the nonlinear interactions between basal and apical input streams and show that standard unsupervised Hebbian learning rules in the basal compartment allow the neuron to align the feed-forward basal input with the top-down target signal received by the apical compartment. We show that this learning process, termed coincidence detection, is robust against strong distractions in the basal input space and demonstrate its effectiveness in a linear classification task.
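The learning process described above can be illustrated with a minimal sketch; the rate-based neuron, the specific Hebbian rule gated by the apical signal, and the weight normalization are simplifying assumptions, not the authors' exact model:

```python
import numpy as np

rng = np.random.default_rng(0)
n_basal = 20
target = rng.normal(size=n_basal)       # direction encoded by the apical (top-down) signal
target /= np.linalg.norm(target)
w = rng.normal(0, 0.1, n_basal)         # basal synaptic weights
w /= np.linalg.norm(w)

eta = 0.05                              # learning rate
for _ in range(500):
    x = rng.normal(size=n_basal)        # basal input pattern
    apical = np.tanh(target @ x)        # top-down / feedback signal
    # Hebbian update gated by the apical signal: basal weights are
    # potentiated whenever basal input and top-down signal coincide
    w += eta * apical * x
    w /= np.linalg.norm(w)              # keep the weights bounded

alignment = float(w @ target)           # cosine similarity between w and target
```

Because the update direction correlates with the apical signal, the basal weight vector drifts toward the target direction, which is the alignment the abstract refers to; distractors in the basal input enter only as zero-mean noise in the update.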
We present an immersed boundary method for the solution of elliptic interface problems with discontinuous coefficients which provides a second-order approximation of the solution. The proposed method can be categorised as an extended or enriched finite element method. In contrast to other extended FEM approaches, the new shape functions are projected so as to satisfy the Kronecker-delta property with respect to the interface. The resulting combination of projection and restriction was already derived in Höllbacher and Wittum (TBA, 2019a) for application to particulate flows. The crucial benefits are the preservation of the symmetry and positive definiteness of the continuous bilinear operator; moreover, no additional stabilisation terms are necessary. Furthermore, since our enrichment can be interpreted as adaptive mesh refinement, standard integration schemes can be applied on the cut elements. Finally, small cut elements do not impair the conditioning of the scheme, and we propose a simple procedure to ensure good conditioning independent of the location of the interface. The stability and convergence of the solution are proven, and the numerical tests demonstrate optimal order of convergence.