In this contribution we examine development trends of infrastructures in the digital humanities. We argue that considerable pressure to adapt has arisen for the further development of such infrastructures as a result of (1) the availability of ever more data on socio-semiotic networks, (2) the inflation of methods in humanities disciplines, (3) the increasingly hybrid division of labour between humans and machines, and (4) the explosive proliferation of artificial texts. In this context we describe three information systems that differ, among other things, in the interaction possibilities they offer their users to meet such challenges. With VienNA we sketch a novel architecture for such systems which, owing to its flexibility, could offer the means to master the latter challenges.
With the Smart Learning Infrastructure, a novel didactic concept for continuing-education courses was developed. The infrastructure can be applied in many contexts. First analyses of courses show that participants who completed all exercises correctly achieve a grade better than the course average. This contribution describes a concept for a gamification module that uses playful elements to encourage participants, as early as possible, to work through all exercises of a course correctly and thoughtfully.
We propose and create a new data model for learning-specific environments and learning analytics applications. It is motivated by experience with the Fiber Bundle Data Model used for large, time- and space-dependent data. Our proposed data model integrates file- or stream-based data structures from capturing devices more easily. Learning analytics algorithms are attached directly to the data, and queries and analytics are formulated in Python. The model is designed to improve collaboration in the field of learning analytics. We leverage a hierarchical data structure in which varying data is located near the leaves. Abstract data types are identified along four distinct pathways, which allows storing highly diverse data sources. We compare different implementations with respect to their memory footprint and performance. Our tests indicate that LeAn Bundles can be smaller than a naïve xAPI export. The benchmarks show that performance is comparable to MongoDB, with the added benefit of being portable and extensible.
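The abstract describes a hierarchical structure with varying data near the leaves and queries written in plain Python. The following is a minimal, hypothetical sketch of that idea — all names (`LeanBundle`, `add_event`, `query`) are illustrative, not the authors' actual API:

```python
class LeanBundle:
    """Minimal hierarchical container: learner -> session -> events.
    Varying data (the events) lives near the leaves of the hierarchy."""

    def __init__(self):
        self.learners = {}  # learner_id -> {session_id: [events]}

    def add_event(self, learner_id, session_id, event):
        sessions = self.learners.setdefault(learner_id, {})
        sessions.setdefault(session_id, []).append(event)

    def query(self, predicate):
        """Yield (learner, session, event) for events matching a
        plain Python predicate function."""
        for learner_id, sessions in self.learners.items():
            for session_id, events in sessions.items():
                for event in events:
                    if predicate(event):
                        yield (learner_id, session_id, event)

bundle = LeanBundle()
bundle.add_event("alice", "s1", {"verb": "completed", "object": "quiz-1"})
bundle.add_event("alice", "s1", {"verb": "viewed", "object": "video-2"})
bundle.add_event("bob", "s2", {"verb": "completed", "object": "quiz-1"})

completed = list(bundle.query(lambda e: e["verb"] == "completed"))
```

The point of the sketch is that analytics code sits directly beside the data and queries are ordinary Python callables, rather than a separate query language.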
Measuring digital competencies of university teachers: validation study of a competence grid
(2018)
This contribution describes the development of a competence grid for assessing the digital competencies of university teachers and presents results of the grid's validation. To this end, the results of a pre-test (N=90) among participants of an e-learning qualification programme are analysed with inferential statistics. In addition, for external validation of the competence grid, the results are compared with statements by survey participants that were obtained from e-portfolios using qualitative methods. The scale analyses yielded clear single-factor solutions with good explained variance for six of the eight subdimensions of digital competence. The subscales show high internal consistency. Two dimensions split factor-analytically into further subtests, which also prove reliable in the test. Positive evidence for the validity of the competence grid was gathered through correspondences with statements from the e-portfolios.
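"High internal consistency" of a subscale is conventionally quantified with Cronbach's alpha. As a generic illustration (the item scores below are invented toy data, not results from this study):

```python
# Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item variances) / variance(totals))

def variance(xs):
    """Sample variance (divisor n-1)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(items):
    """items: one inner list of scores per item, aligned by respondent."""
    k = len(items)
    totals = [sum(col) for col in zip(*items)]          # per-respondent sums
    item_var = sum(variance(item) for item in items)
    return (k / (k - 1)) * (1 - item_var / variance(totals))

# Three items answered by five respondents (toy data)
items = [
    [3, 4, 4, 5, 2],
    [3, 5, 4, 4, 2],
    [2, 4, 5, 5, 3],
]
alpha = cronbach_alpha(items)  # high alpha: the items move together
```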
Students enter university computer science programmes with very different competencies, experience and knowledge. 145 datasets on freshman computer science students, collected by learning management systems and linked with exam outcomes and learning dispositions data (e.g. student dispositions, previous experiences and attitudes measured through self-reported surveys), were analysed to identify indicators that predict academic success and hence to enable effective interventions for an extremely heterogeneous group of students.
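The kind of indicator-based prediction described above can be sketched as a simple logistic model mapping a disposition score to a pass probability. The data, feature, and model below are invented for illustration; the study's actual indicators and method are not specified here:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(xs, ys, lr=0.1, epochs=2000):
    """Plain stochastic-gradient fit of P(pass) = sigmoid(w*x + b)."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = sigmoid(w * x + b)
            w += lr * (y - p) * x   # gradient step on the log-likelihood
            b += lr * (y - p)
    return w, b

# Toy data: self-reported prior-experience score vs. exam passed (1/0)
scores = [1, 2, 2, 3, 4, 4, 5, 5]
passed = [0, 0, 0, 1, 1, 1, 1, 1]
w, b = fit_logistic(scores, passed)

p_high = sigmoid(w * 5 + b)  # predicted pass probability, high score
p_low = sigmoid(w * 1 + b)   # predicted pass probability, low score
```

A fitted indicator like this is what would drive an early intervention: flag students whose predicted pass probability falls below a threshold.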
This volume contains the papers presented at the First International Workshop on Rewriting Techniques for Program Transformations and Evaluation (WPTE 2014) which was held on July 13, 2014 in Vienna, Austria during the Vienna Summer of Logic 2014 (VSL 2014) as a workshop of the Sixth Federated Logic Conference (FLoC 2014). WPTE 2014 was affiliated with the 25th International Conference on Rewriting Techniques and Applications joined with the 12th International Conference on Typed Lambda Calculi and Applications (RTA/TLCA 2014).
The Internet, the biggest human library ever assembled, keeps on growing. Although all kinds of information carriers (e.g. audio/video/hybrid file formats) are available, text-based documents dominate. It is estimated that about 80% of all information stored electronically worldwide exists in (or can be converted into) text form. More and more documents of all kinds are generated by means of a text processing system and are therefore available electronically. Nowadays, many printed journals are also published online and may even cease to appear in print tomorrow. This development has many convincing advantages: the documents are available faster (cf. prepress services) and cheaper, they can be searched more easily, their physical storage needs only a fraction of the space previously necessary, and the medium does not age. For most people, fast and easy access is the most interesting feature of the new age; computer-aided search for specific documents or Web pages becomes the basic tool for information-oriented work. But this tool has problems. The current keyword-based search machines available on the Internet are not really appropriate for such a task: either far too many documents matching the specified keywords are presented, or none at all. The problem lies in the fact that it is often very difficult to choose appropriate terms describing the desired topic in the first place. This contribution discusses current state-of-the-art techniques in content-based searching (along with common visualization/browsing approaches) and proposes an adaptive solution for intuitive Internet document navigation, which not only enables the user to provide full texts instead of manually selected keywords (if available), but also allows him/her to explore the whole database.
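Content-based matching of a full text against a document collection is classically done with TF-IDF weighted cosine similarity. The following is a generic sketch of that baseline, not the adaptive navigation system the contribution proposes; the toy corpus is invented:

```python
import math
from collections import Counter

docs = [
    "neural networks learn patterns from data",
    "printed journals move to online publication",
    "keyword search returns too many documents",
]

def tfidf(tokens, corpus):
    """TF-IDF vector with a smoothed inverse document frequency."""
    tf = Counter(tokens)
    n = len(corpus)
    return {
        t: tf[t] * math.log((1 + n) / (1 + sum(t in d.split() for d in corpus)))
        for t in tf
    }

def cosine(a, b):
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# The "query" can be an entire text, not hand-picked keywords
query = "search documents by keyword"
query_vec = tfidf(query.split(), docs)
doc_vecs = [tfidf(d.split(), docs) for d in docs]
sims = [cosine(query_vec, v) for v in doc_vecs]
best = max(range(len(docs)), key=lambda i: sims[i])  # index of closest doc
```

Because every document gets a similarity score rather than a binary keyword match, the user gets a ranking to explore instead of "too many hits or none".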
In intensive care units, physicians face a high lethality rate among septic shock patients. In this contribution we present typical problems and results of a retrospective, data-driven analysis based on two neural network methods applied to the data of two clinical studies. Our approach includes the necessary steps of data mining, i.e. building a database, cleaning and preprocessing the data, and finally choosing an adequate analysis for the medical patient data. We chose two architectures based on supervised neural networks. The patient data is classified into two classes (survived and deceased) by a diagnosis based either on the black-box approach of a growing RBF network or on a second network that can explain its diagnosis through human-understandable diagnostic rules. The advantages and drawbacks of these classification methods for an early warning system are discussed.
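The core mechanism of RBF-style classification can be shown in a few lines: each class is represented by prototype centres, and a patient vector is assigned to the class whose Gaussian basis function responds most strongly. The centres and feature values below are invented two-feature toy data, not clinical data, and a real growing RBF network would learn its centres and widths from training data:

```python
import math

def rbf(x, centre, width=1.0):
    """Gaussian radial basis function response of a prototype centre."""
    dist2 = sum((a - b) ** 2 for a, b in zip(x, centre))
    return math.exp(-dist2 / (2 * width ** 2))

# One prototype centre per outcome class (toy, normalised features)
centres = {"survived": (0.2, 0.3), "deceased": (0.9, 0.8)}

def classify(x):
    """Assign the class whose basis function fires strongest."""
    return max(centres, key=lambda c: rbf(x, centres[c]))

label = classify((0.25, 0.35))  # a patient vector near the "survived" centre
```

The rule-extracting second network mentioned in the abstract would, in contrast, express such decisions as human-readable conditions on the input features instead of opaque basis-function activations.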
The encoding of images by semantic entities is still an unresolved task. This paper proposes encoding images by only a few important components, or image primitives. Classically, this can be done by Principal Component Analysis (PCA). Recently, Independent Component Analysis (ICA) has attracted strong interest in the signal processing and neural network communities. Using ICA components as pattern primitives, we aim for source patterns with the highest occurrence probability or highest information. For the example of a synthetic image composed of characters, this idea selects the salient ones. For natural images it does not lead to an acceptable reproduction error, since no a priori probabilities can be computed. Combining the traditional principal component criteria of PCA with the independence property of ICA, we obtain a better encoding. It turns out that the Independent Principal Components (IPC), in contrast to the Principal Independent Components (PIC), implement the classical demand of Shannon's rate distortion theory.
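The PCA baseline the paper starts from can be illustrated compactly: toy 2-D points lying almost on a line are encoded as a single coefficient along their first principal component and reconstructed with little error. The power iteration used here is a generic method on made-up data, not the paper's algorithm:

```python
def pca_first_component(points, iters=200):
    """Mean, centred data, and dominant eigenvector of a 2-D point set."""
    n = len(points)
    mean = [sum(p[i] for p in points) / n for i in (0, 1)]
    centred = [(p[0] - mean[0], p[1] - mean[1]) for p in points]
    # Entries of the 2x2 covariance matrix
    cxx = sum(x * x for x, _ in centred) / n
    cyy = sum(y * y for _, y in centred) / n
    cxy = sum(x * y for x, y in centred) / n
    v = (1.0, 0.0)
    for _ in range(iters):  # power iteration -> dominant eigenvector
        w = (cxx * v[0] + cxy * v[1], cxy * v[0] + cyy * v[1])
        norm = (w[0] ** 2 + w[1] ** 2) ** 0.5
        v = (w[0] / norm, w[1] / norm)
    return mean, centred, v

# Toy points lying almost on the line y = 2x
points = [(0, 0), (1, 2.1), (2, 3.9), (3, 6.0), (4, 8.1)]
mean, centred, v = pca_first_component(points)

# Encode each point as one projection coefficient, then reconstruct
coeffs = [x * v[0] + y * v[1] for x, y in centred]
recon = [(mean[0] + c * v[0], mean[1] + c * v[1]) for c in coeffs]
err = sum((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
          for p, q in zip(points, recon))  # small reconstruction error
```

PCA picks directions of maximal variance; the paper's point is that combining this criterion with ICA's statistical independence yields primitives that encode images better than either alone.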