Dancing is an activity that positively enhances people's mood; it consists of feeling the music and expressing it through rhythmic body movements. Learning how to dance can be challenging because it requires proper coordination and an understanding of rhythm and beat. In this paper, we present the first implementation of the Dancing Coach (DC), a generic system designed to support the practice of dancing steps, which in its current state supports the practice of basic salsa dancing steps. However, the DC has been designed to allow the addition of more dance styles. We also present the first user evaluation of the DC, which consists of user tests with 25 participants. Results from the user tests show that participants stated they had learned the basic salsa dancing steps, how to move to the beat, and body coordination in a fun way. Results also point out some directions for improving future versions of the DC.
This draft code of conduct is aimed at higher education institutions that want to use learning analytics to improve the quality of learning and teaching. The code can serve as a template for creating organization-specific codes of conduct. At institutions planning to introduce learning analytics, it should be reviewed in consultation with all stakeholder groups and adapted to the goals and existing practices of the respective institution. The code was developed by the Innovation Forum Trusted Learning Analytics of the Hesse-wide project „Digital gestütztes Lehren und Lernen in Hessen", based on an analysis of existing European codes (Engelfriet, Manderveld & Jeunink, 2017; Westerlaken, Manderveld & Jorna, 2019; Sclater & Bailey, 2015; Open University UK, 2014; University of Edinburgh, 2018) and the legal framework applicable in Germany.
The development of multimodal sensor-based applications designed to support learners in improving their skills is expensive, since most of these applications are tailor-made and built from scratch. In this paper, we show how the Presentation Trainer (PT), a multimodal sensor-based application designed to support the development of public speaking skills, can be modularly extended with a Virtual Reality real-time feedback module (VR module), which makes the use of the PT more immersive and comprehensive. The described study consists of a formative evaluation and has two main objectives. Firstly, a technical objective concerns the feasibility of extending the PT with an immersive VR module. Secondly, a user experience objective focuses on the level of satisfaction when interacting with the VR-extended PT. To study these objectives, we conducted user tests with 20 participants. Results from our tests show the feasibility of modularly extending existing multimodal sensor-based applications and, in terms of learning and user experience, indicate a positive attitude of the participants towards using the application (PT + VR module).
Learning to solve graph tasks is one of the key prerequisites for acquiring domain-specific knowledge in most study domains. Analyses of graph understanding often use eye-tracking and focus on analyzing how much time students spend gazing at particular areas of a graph, the Areas of Interest (AOIs). To gain deeper insight into students' task-solving process, we argue that the gaze shifts between students' fixations on different AOIs (so-called transitions) also need to be included in holistic analyses of graph understanding that consider the importance of transitions for the task-solving process. Thus, we introduced Epistemic Network Analysis (ENA) as a novel approach to analyze eye-tracking data of 23 university students who solved eight multiple-choice graph tasks in physics and economics. ENA is a method for quantifying, visualizing, and interpreting network data that allows a weighted analysis of the gaze patterns of both correct and incorrect graph task solvers, considering the interrelations between fixations and transitions. After analyzing the differences in the number of fixations and the number of single transitions between correct and incorrect solvers, we conducted an ENA for each task. We demonstrate that an isolated analysis of fixations and transitions provides only limited insight into graph-solving behavior. In contrast, ENA identifies differences between the gaze patterns of students who solved the graph tasks correctly and incorrectly across the multiple graph tasks. For instance, incorrect solvers shifted their gaze from the graph to the x-axis and from the question to the graph comparatively more often than correct solvers. The results indicate that incorrect solvers often have problems transferring textual information into graphical information and rely more on partly irrelevant parts of a graph. Finally, we discuss how the findings can be used to design experimental studies and to develop innovative instructional procedures in higher education.
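To make the distinction between fixations and transitions concrete, the minimal sketch below counts both from a single fixation sequence. The AOI labels and the sequence are hypothetical and purely illustrative; ENA would then model the network of co-occurring codes rather than these raw counts.

```python
from collections import Counter

# One participant's fixation sequence, coded by AOI (hypothetical labels and data).
fixations = ["question", "graph", "x-axis", "graph", "question", "graph", "y-axis"]

# Fixation counts per AOI.
fixation_counts = Counter(fixations)

# Transition counts: consecutive fixations on different AOIs count as one gaze shift.
transition_counts = Counter(
    (a, b) for a, b in zip(fixations, fixations[1:]) if a != b
)

print("Fixations per AOI:", dict(fixation_counts))
print("Transitions between AOIs:", dict(transition_counts))
```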
An exploratory latent class analysis of student expectations towards learning analytics services
(2021)
For service implementations to be widely adopted, the expectations of the key stakeholders need to be considered. Failure to do so may lead to services reflecting ideological gaps, which will inadvertently create dissatisfaction among their users. Learning analytics research has begun to recognise the importance of understanding the student perspective towards the services that could potentially be offered; however, student engagement remains low. Furthermore, there has been no attempt to explore whether students can be segmented into different groups based on their expectations towards learning analytics services. Doing so would allow for a greater understanding of what is and is not expected from learning analytics services within a sample of students. The current exploratory work addresses this limitation by using the three-step approach to latent class analysis to understand whether student expectations of learning analytics services can be clearly segmented, using self-report data obtained from a sample of students at the Open University in the Netherlands. The findings show that student expectations regarding ethical and privacy elements of a learning analytics service are consistent across all groups; however, expectations of service features are quite variable. These results are discussed in relation to previous work on student stakeholder perspectives, policy development, and the European General Data Protection Regulation (GDPR).
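To illustrate the kind of class enumeration such a segmentation involves, the sketch below fits mixture models with different numbers of classes to simulated Likert-style expectation ratings and picks the class count by BIC. Note that scikit-learn's GaussianMixture is used here only as a rough stand-in for the categorical latent class model, and the bias-adjusted three-step procedure used in the paper is omitted; the data, item count, and class range are all hypothetical.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Hypothetical expectation ratings (rows: students, columns: survey items, 1-7 scale).
X = rng.integers(1, 8, size=(200, 12)).astype(float)

# Fit models with 1..5 classes and compare BIC to choose the number of segments.
models = [GaussianMixture(n_components=k, random_state=0).fit(X) for k in range(1, 6)]
best = min(models, key=lambda m: m.bic(X))

print("Chosen number of classes:", best.n_components)
print("Class assignments for first 10 students:", best.predict(X[:10]))
```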
Collaboration is an important 21st-century skill. Co-located (or face-to-face) collaboration (CC) analytics gained momentum with the advent of sensor technology. Most of these works have used the audio modality to detect the quality of CC. CC quality can be detected from simple indicators of collaboration, such as total speaking time, or complex indicators, like synchrony in the rise and fall of the average pitch. Most studies in the past focused on “how group members talk” (i.e., spectral and temporal features of audio, like pitch) and not on “what they talk about”. The “what” of the conversations is more overt than the “how”. Very few studies have examined “what” group members talk about, and these were lab-based studies showing a representative overview of specific words as topic clusters instead of analysing the richness of the conversational content by understanding the linkage between these words. To overcome this, in this technical paper we take a first step, based on field trials, towards prototyping a tool for automatic collaboration analytics. We designed a technical setup to collect, process, and visualize audio data automatically. Data were collected while university staff with pre-assigned roles played a board game designed to create awareness of the connection between learning analytics and learning design. We not only performed a word-level analysis of the conversations but also analysed their richness by interactively visualizing the strength of the linkage between words and phrases. In this visualization, we used a network graph to visualize turn-taking exchanges between different roles, along with the word-level and phrase-level analysis. We also used centrality measures to further understand the network graph, based on how much hold certain words have over the network and how influential they are. Finally, we found that this approach had certain limitations in terms of automation in speaker diarization (i.e., who spoke when) and text data pre-processing. Therefore, we concluded that even though the technical setup was only partially automated, it offers a way forward for understanding the richness of conversations between different roles and makes a significant step towards automatic collaboration analytics.
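As a rough illustration of the network part of such a pipeline, the sketch below builds a directed, weighted turn-taking graph between roles and computes simple centrality measures with networkx. The role names and turn sequence are hypothetical, and the word- and phrase-level analysis described in the abstract is not shown here.

```python
import networkx as nx

# Sequence of speaker turns, in order (hypothetical roles and data).
turns = ["teacher", "designer", "analyst", "teacher", "analyst", "designer", "teacher"]

# Directed turn-taking network: an edge A -> B counts how often B speaks right after A.
G = nx.DiGraph()
for current, nxt in zip(turns, turns[1:]):
    if current == nxt:
        continue  # skip consecutive turns by the same role
    if G.has_edge(current, nxt):
        G[current][nxt]["weight"] += 1
    else:
        G.add_edge(current, nxt, weight=1)

# Centrality measures as rough indicators of which roles dominate the exchange.
print("Degree centrality:", nx.degree_centrality(G))
print("Betweenness centrality:", nx.betweenness_centrality(G))
```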
Digital distractions can interfere with goal attainment and lead to undesirable habits that are hard to get rid of. Various digital self-control interventions promise support to alleviate the negative impact of digital distractions. These interventions use different approaches, such as blocking apps and websites, goal setting, or visualizations of device usage statistics. While many apps and browser extensions make use of these features, little is known about their effectiveness. This systematic review synthesizes the current research to provide insights into the effectiveness of the different kinds of interventions. From a search of the ACM, Springer Link, Web of Science, IEEE Xplore, and PubMed databases, we identified 28 digital self-control interventions. We categorized these interventions according to their features and their outcomes. The interventions showed varying degrees of effectiveness; in particular, interventions that relied purely on increasing participants' awareness were barely effective. For interventions that sanctioned the use of distractions, the current literature indicates that the sanctions have to be sufficiently difficult to overcome, as they will otherwise be quickly dismissed. The overall confidence in the results is low, owing to small sample sizes, short study durations, and unclear study contexts. From these insights, we highlight research gaps and close with suggestions for future research.
Chatbots are a promising technology with the potential to enhance workplaces and everyday life. In terms of scalability and accessibility, they also offer unique possibilities as communication and information tools for digital learning. In this paper, we present a systematic literature review investigating the areas of education where chatbots have already been applied, exploring the pedagogical roles of chatbots, their use for mentoring purposes, and their potential to personalize education. We conducted a preliminary analysis of 2,678 publications to perform this literature review, which allowed us to identify 74 relevant publications on the application of chatbots in education. Through this, we address five research questions that, together, allow us to explore the current state of the art of this educational technology. We conclude our systematic review by pointing to three main research challenges: 1) Aligning chatbot evaluations with implementation objectives, 2) Exploring the potential of chatbots for mentoring students, and 3) Exploring and leveraging the adaptation capabilities of chatbots. For all three challenges, we discuss opportunities for future research.