004 Data processing; Computer science
Year of publication
- 2019 (41)
Document Type
- Article (17)
- Doctoral Thesis (9)
- Working Paper (5)
- Bachelor Thesis (3)
- Preprint (3)
- Conference Proceeding (2)
- Book (1)
- Contribution to a Periodical (1)
Has Fulltext
- yes (41)
Is part of the Bibliography
- no (41)
Keywords
- concurrency (3)
- BioCreative V.5 (2)
- BioNLP (2)
- Multimodal Learning Analytics (2)
- Named entity recognition (2)
- Petrov-Galerkin finite volumes (2)
- Virtuelle Realität (2)
- functional programming (2)
- pi-calculus (2)
- ALICE (1)
Institute
- Informatik (17)
- Informatik und Mathematik (8)
- Frankfurt Institute for Advanced Studies (FIAS) (4)
- Medizin (3)
- Biowissenschaften (2)
- Center for Scientific Computing (CSC) (2)
- Deutsches Institut für Internationale Pädagogische Forschung (DIPF) (2)
- Gesellschaftswissenschaften (2)
- Kulturwissenschaften (1)
- Neuere Philologien (1)
The following thesis deals with a human-computer interaction interface that allows writing by means of gestures. The system enables its users to add new gestures and use them. Since gestures can be recognized more reliably the more accurate the representation of the hands is, the hand data are transmitted to the computer via data gloves. On the one hand, the hands are rendered in Virtual Reality (VR) so that the user can see them; on the other hand, the data required for gesture recognition are forwarded to the interface. Gesture recognition is implemented with a neural network (NN), which is able to distinguish gestures provided it has received sufficient training data. The gestures used are performed with either one or both hands. In this work, the gestures primarily express relational operators describing relationships between objects, such as "equal" or "greater than or equal". Finally, the thesis builds a system that makes it possible to express sentences with gestures, realizing the so-called gestural writing of Mehler, Lücking, and Abrami (2014). To this end, the user is placed in a virtual room with objects that can be linked, manifesting sentences in a relational context.
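A minimal sketch of the recognition step described above: a small neural network is trained on glove feature vectors and predicts a gesture class. The 40-dimensional joint-angle encoding, the three gesture classes, and the synthetic training data are illustrative assumptions, not the thesis's actual setup.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Hypothetical glove encoding: 20 joint-angle values per hand, two hands -> 40 features.
def sample_poses(center, n=50):
    # Noisy repetitions of one gesture's characteristic hand pose.
    return center + rng.normal(scale=0.05, size=(n, 40))

# Three well-separated synthetic gestures standing in for e.g. "equal", "greater-equal".
centers = [np.zeros(40), np.ones(40), -np.ones(40)]
X = np.vstack([sample_poses(c) for c in centers])
y = np.repeat([0, 1, 2], 50)

# A small feed-forward network, as in the thesis's NN-based recognizer.
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
clf.fit(X, y)
print(clf.score(X, y))
```

Given enough training samples per gesture, the network separates the classes; adding a new gesture amounts to collecting pose samples for it and retraining.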
Relying on the theory of Saward (2010) and Disch (2015), we study political representation through the lens of representative claim-making. We identify a gap between the theoretical concept of claim-making and the empirical (quantitative) assessment of representative claims made in real-world representative contexts. We therefore develop a new approach to map and quantify representative claims, in order to subsequently measure the reception and validation of those claims by the audience. To test our method, we analyse all debates of the German parliament concerned with the introduction of the gender quota for German supervisory boards from 2013 to 2017 in a two-step process. First, we assess which constituencies the MPs claim to represent and how they justify their stance; drawing on multiple correspondence analysis, we identify different claim patterns. Second, using natural language processing techniques and logistic regression on social media data, we measure whether and how the claims asserted in the parliamentary debates are received and validated by the respective audience. We conclude that the constituency as the ultimate judge of legitimacy has not yet been comprehensively conceptualized.
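The second step (measuring claim reception with NLP and logistic regression) could be sketched roughly as follows; the toy posts, labels, and pipeline are illustrative assumptions, not the authors' actual model or data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy social-media posts, labelled 1 if they validate a representative claim
# about the gender quota and 0 if they reject it (invented examples).
posts = [
    "the quota finally gives women a fair share",
    "quotas are unfair tokenism",
    "great step for equal representation",
    "this law ignores merit entirely",
]
labels = [1, 0, 1, 0]

# Text features plus a logistic-regression classifier over reception labels.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(posts, labels)
print(clf.predict(["a fair share for women at last"]))
```

Applied at scale, the fitted model estimates for each audience reaction whether a claim is validated, which can then be aggregated per claim pattern.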
In this contribution, two open problems in computational stemmatology are considered. The first is contamination, an umbrella term for all phenomena of admixture of text variants that result when scribes consult more than one manuscript, or even memory, while copying a text. This is one of the biggest problems in stemmatology to date, since it requires an entirely different formal approach to reconstructing the copy history of a tradition and, in turn, to reconstructing an urtext. Maas (1937) famously stated that there is no remedy against contamination, and Pasquali and Pieraccioni (1952) coined the terms 'open' vs. 'closed' recensions to distinguish contaminated from uncontaminated traditions. We present a graph-theoretical model which formally accommodates traditions with any degree of contamination while maintaining a temporal ordering, and we derive combinatorial counts and formulae for the number of possible scenarios this implies.
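The distinction between closed and open recensions can be made concrete in a small sketch: if a copy history is modelled as a directed graph with an edge from exemplar to copy, contamination appears as a witness with more than one exemplar (in-degree greater than 1), so a closed tradition is a tree. The witness names and edge lists below are invented for illustration and are not from the paper.

```python
# A copy history as a directed graph: edge (a, b) means witness b was copied from a.
# In an uncontaminated ("closed") tradition every witness has at most one exemplar,
# so the graph is a tree rooted at the archetype; contamination breaks this.

def contaminated_witnesses(edges):
    """Return witnesses copied from more than one exemplar (in-degree > 1)."""
    indegree = {}
    for src, dst in edges:
        indegree[dst] = indegree.get(dst, 0) + 1
    return sorted(w for w, d in indegree.items() if d > 1)

closed_tradition = [("w", "A"), ("w", "B"), ("A", "C")]
open_tradition = [("w", "A"), ("w", "B"), ("A", "C"), ("B", "C")]  # C collates A and B

print(contaminated_witnesses(closed_tradition))  # []
print(contaminated_witnesses(open_tradition))    # ['C']
```

Counting the possible directed graphs of this kind (rather than only trees), subject to a temporal ordering of the witnesses, is what drives the combinatorial explosion of candidate scenarios discussed in the contribution.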
Summary: Understanding the role of short-interfering RNA (siRNA) in diverse biological processes is of current interest and often approached through small RNA sequencing. However, analysis of these datasets is difficult due to the complexity of biological RNA processing pathways, which differ between species. Properties like strand specificity, length distribution, and distribution of soft-clipped bases are among the parameters known to guide researchers in understanding the role of siRNAs. We present RAPID, a generic eukaryotic siRNA analysis pipeline, which captures information inherent in the datasets and automatically produces numerous visualizations as user-friendly HTML reports, covering multiple categories required for siRNA analysis. RAPID also facilitates an automated comparison of multiple datasets, with one of the normalization techniques dedicated to siRNA knockdown analysis, and integrates differential expression analysis using DESeq2. RAPID is available under the MIT license at https://github.com/SchulzLab/RAPID. We recommend using it as a conda environment available from https://anaconda.org/bioconda/rapid.
The development of multimodal sensor-based applications designed to support learners in improving their skills is expensive, since most of these applications are tailor-made and built from scratch. In this paper, we show how the Presentation Trainer (PT), a multimodal sensor-based application designed to support the development of public speaking skills, can be modularly extended with a Virtual Reality real-time feedback module (VR module), which makes the use of the PT more immersive and comprehensive. The described study consists of a formative evaluation and has two main objectives. First, a technical objective concerns the feasibility of extending the PT with an immersive VR module. Second, a user experience objective focuses on the level of satisfaction when interacting with the VR-extended PT. To study these objectives, we conducted user tests with 20 participants. The results show the feasibility of modularly extending existing multimodal sensor-based applications and, in terms of learning and user experience, indicate a positive attitude of the participants towards using the application (PT + VR module).
Browsing the web for school: social inequality in adolescents’ school-related use of the internet
(2019)
This article examines whether social inequality exists in European adolescents' school-related Internet use regarding consuming (browsing) and productive (uploading/sharing) activities. These school-related activities are contrasted with adolescents' Internet activities for entertainment purposes. Data from the Programme for International Student Assessment (PISA) 2012 are used for the empirical analyses. Results of partial proportional odds models show that students with higher-educated parents and more books at home tend to use the Internet more often for school-related tasks than their less privileged counterparts. This pattern is similar for school-related browsing and sharing activities. In contrast to these findings on school-related Internet activities, parental education and the number of books at home are negatively associated with adolescents' frequency of using the Internet for entertainment purposes. The implications of digital inequalities for educational inequalities are discussed.
A state-of-the-art pattern recognition method in machine learning (a deep convolutional neural network) is used to identify the equation of state (EoS) employed in relativistic hydrodynamic simulations of heavy-ion collisions. High-level correlations of particle spectra in transverse momentum and azimuthal angle learned by the network act as an effective EoS-meter in deciphering the nature of the phase transition in QCD. The EoS-meter is model independent and insensitive to other simulation inputs, including the initial conditions and shear viscosity of the hydrodynamic simulations. Through this study we demonstrate that a traceable encoding of the dynamical information from the phase structure survives the evolution and exists in the final snapshot of heavy-ion collisions, and that machine learning can exclusively and effectively decode this information from the highly complex final output where traditional methods fail. Besides the deep neural network, the performance of traditional machine learning classifiers is also reported.
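A rough sketch of such an EoS-meter, assuming the input is a particle spectrum ρ(pT, φ) discretised on a grid and the task is binary classification (e.g. crossover vs. first-order transition); the grid size, layer widths, and class count below are assumptions for illustration, not the paper's actual network.

```python
import torch
import torch.nn as nn

class EoSMeter(nn.Module):
    """Toy CNN classifying an EoS from a discretised rho(pT, phi) spectrum."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AvgPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # pool away the (pT, phi) grid
            nn.Flatten(),
            nn.Linear(16, 2),         # two EoS classes (assumed)
        )

    def forward(self, x):
        return self.net(x)

model = EoSMeter()
batch = torch.randn(4, 1, 15, 48)  # 4 simulated events on an assumed 15x48 grid
logits = model(batch)
print(logits.shape)  # torch.Size([4, 2])
```

Trained on labelled hydrodynamic simulations, the convolutional layers learn exactly the kind of high-level spectral correlations the abstract describes, while the pooling makes the classifier insensitive to where on the grid those correlations appear.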
The main contribution of the thesis is in helping to understand which software system parameters most strongly affect the performance of Big Data platforms under realistic workloads. In detail, the main research contributions of the thesis are:
1. Definition of the new concept of heterogeneity for Big Data Architectures (Chapter 2);
2. Investigation of the performance of Big Data systems (e.g. Hadoop) in virtualized environments (Section 3.1);
3. Investigation of the performance of NoSQL databases versus Hadoop distributions (Section 3.2);
4. Execution and evaluation of the TPCx-HS benchmark (Section 3.3);
5. Evaluation and comparison of Hive and Spark SQL engines using benchmark queries (Section 3.4);
6. Evaluation of the impact of compression techniques on SQL-on-Hadoop engine performance (Section 3.5);
7. Extensions of the standardized Big Data benchmark BigBench (TPCx-BB) (Sections 4.1 and 4.3);
8. Definition of a new benchmark, called ABench (Big Data Architecture Stack Benchmark), that takes into account the heterogeneity of Big Data architectures (Section 4.5).
The thesis is an attempt to redefine system benchmarking, taking into account the new requirements posed by Big Data applications. With the explosion of Artificial Intelligence (AI) and new hardware computing power, this is a first step towards a more holistic approach to benchmarking.