Towards automatic collaboration analytics for group speech data using learning analytics

Collaboration is an important 21st-century skill. Co-located (or face-to-face) collaboration (CC) analytics gained momentum with the advent of sensor technology. Most of this work has used the audio modality to detect the quality of CC. CC quality can be detected from simple indicators of collaboration, such as total speaking time, or complex indicators, such as synchrony in the rise and fall of the average pitch. Most past studies focused on “how group members talk” (i.e., spectral and temporal features of audio, such as pitch) rather than “what they talk about”. The “what” of the conversations is more overt, in contrast to the “how”. Very few studies have examined “what” group members talk about, and those were lab-based, showing a representative overview of specific words as topic clusters instead of analysing the richness of the conversational content by understanding the linkage between those words. To overcome this, in this technical paper we took a first step, based on field trials, towards prototyping a tool for automatic collaboration analytics. We designed a technical setup to collect, process and visualize audio data automatically. The data were collected while university staff with pre-assigned roles played a board game designed to create awareness of the connection between learning analytics and learning design. We not only performed a word-level analysis of the conversations but also analysed their richness by interactively visualizing the strength of the linkage between words and phrases. In this visualization, we used a network graph to visualize the turn-taking exchange between different roles, alongside the word-level and phrase-level analysis. We also used centrality measures to analyse the network graph further, based on how much hold certain words have over the network of words and how influential they are.
Finally, we found that this approach has certain limitations regarding the automation of speaker diarization (i.e., who spoke when) and of text data pre-processing. We therefore concluded that, even though the technical setup was only partially automated, it is a way forward to understanding the richness of the conversations between different roles and a significant step towards automatic collaboration analytics.
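The centrality analysis described in the abstract can be sketched with a toy example. This is a minimal illustration under assumed inputs: the word pairs are invented, and degree and closeness centrality are used here as hypothetical stand-ins for the paper's own measures of a word's hold over the network and its influence.

```python
# Illustrative sketch (plain Python, no external libraries): build a small word
# co-occurrence network and score each word by degree centrality ("hold" over
# the network) and closeness centrality (one possible notion of "influence").
# The word pairs below are invented examples, not the paper's actual data.
from collections import defaultdict, deque

# Hypothetical co-occurrence pairs: two words appearing in the same utterance.
pairs = [
    ("data", "analytics"),
    ("data", "learning"),
    ("learning", "analytics"),
    ("design", "learning"),
    ("design", "course"),
]

# Undirected adjacency structure.
adj = defaultdict(set)
for a, b in pairs:
    adj[a].add(b)
    adj[b].add(a)

n = len(adj)

# Degree centrality: fraction of the other words a word is directly linked to.
degree = {w: len(nbrs) / (n - 1) for w, nbrs in adj.items()}

def bfs_distances(src):
    """Shortest-path distances (in hops) from src to every reachable word."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

# Closeness centrality: words with short paths to all others score higher.
closeness = {w: (n - 1) / sum(bfs_distances(w).values()) for w in adj}

print(max(degree, key=degree.get))        # "learning" links to the most words
print(max(closeness, key=closeness.get))  # "learning" is also most central here
```

In this toy graph both measures pick out the same hub word, but on a larger conversation network they can diverge: a word may co-occur with few others (low degree) yet bridge otherwise separate topic clusters (high closeness).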

Metadata
Author:Sambit Praharaj, Maren Scheffel, Marcel Schmitz, Marcus Specht, Hendrik Drachsler
URN:urn:nbn:de:hebis:30:3-621172
DOI:https://doi.org/10.3390/s21093156
ISSN:1424-8220
Parent Title (English):Sensors
Publisher:MDPI
Place of publication:Basel
Document Type:Article
Language:English
Date of Publication (online):2021/05/02
Date of first Publication:2021/05/02
Publishing Institution:Universitätsbibliothek Johann Christian Senckenberg
Release Date:2021/08/18
Tag:co-located collaboration analytics; collaboration; collaboration analytics; group speech analytics; multimodal learning analytics
Volume:21
Issue:9, art. 3156
Page Number:22
First Page:1
Last Page:22
HeBIS-PPN:486698882
Institutes:Informatik und Mathematik
Dewey Decimal Classification:0 Computer science, information & general works / 00 Computer science, knowledge & systems / 004 Data processing; computer science
6 Technology, medicine, applied sciences / 62 Engineering / 620 Engineering and allied operations
Collections:University publications
Licence:Creative Commons Attribution 4.0