The project »Digi_Gap – Closing Digital Gaps in Teacher Education« is funded by the Federal Ministry of Education and Research from 2020 to 2023 within the Qualitätsoffensive Lehrerbildung (QLB) (funding amount: 1,678,023 euros) and comprises five subprojects with 19 researchers from nine departments at Goethe University. The project is led by Prof. Dr. Holger Horz (overall scientific lead) and Dr. Claudia Burger (operational lead). Also funded by the QLB at Goethe University is »The Next Level«, the successor project to »Level« (»Lehrerbildung vernetzt entwickeln«), with which Digi_Gap is closely connected in content and structure. The leadership and coordination team of Digi_Gap (leads: Holger Horz & Claudia Burger; coordination: Johannes Appel and Annika Kreft) also answered questions from UniReport on the currently topical issue of »homeschooling«.
This paper addresses the development of performance-based assessment items for ICT skills, i.e., skills in dealing with information and communication technologies, a construct that has so far been defined only broadly and operationally. Item development followed a construct-driven approach to ensure that test scores could be interpreted as intended. Specifically, ICT-specific knowledge, problem-solving, and the comprehension of text and graphics were defined as components of ICT skills, alongside cognitive ICT tasks (i.e., accessing, managing, integrating, evaluating, creating). To capture the construct in a valid way, design principles for constructing the simulation environment and the response format were formulated. To empirically evaluate the very heterogeneous items and detect malfunctioning ones, item difficulties were analyzed, and behavior-related indicators with item-specific thresholds were developed and applied. The difficulty scores of the 69 items, estimated with the Rasch model, fell within a comparable range for each cognitive task. Process indicators addressing time use and test-taker interactions were used to analyze whether most test-takers executed the intended processes, exhibited disengagement, or got lost among the items. Most items were capable of eliciting the intended behavior; for the few exceptions, conclusions for item revisions were drawn. The results affirm the utility of the proposed framework for developing and implementing performance-based items to assess ICT skills.
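A threshold-based process indicator of the kind described above can be sketched as follows. This is an illustrative example only, not the authors' implementation: the log format, the function name `flag_disengaged`, and the threshold values are all assumptions. It flags responses whose time on an item falls below that item's specific threshold, a common operationalization of rapid-guessing disengagement.

```python
# Hypothetical sketch: flag disengaged (rapid-guessing) responses using
# item-specific time thresholds. Data layout and thresholds are invented
# for illustration; they do not reflect the study's actual log format.

def flag_disengaged(time_log, thresholds):
    """Return, per item, the test-takers whose response time fell
    below that item's threshold (seconds)."""
    flagged = {}
    for (person, item), seconds in time_log.items():
        if seconds < thresholds[item]:
            flagged.setdefault(item, []).append(person)
    return flagged

# Toy data: (person, item) -> time on item in seconds.
time_log = {("p1", "i1"): 4.2, ("p2", "i1"): 55.0, ("p1", "i2"): 80.3}
thresholds = {"i1": 10.0, "i2": 15.0}
print(flag_disengaged(time_log, thresholds))  # {'i1': ['p1']}
```

In practice, such thresholds would be derived per item, e.g., from the empirical response-time distribution, rather than set by hand as in this toy example.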
In this explorative study, we investigate how sequences of behaviour are related to success or failure in complex problem‐solving (CPS). To this end, we analysed log data from two different tasks of the problem‐solving assessment of the Programme for International Student Assessment 2012 study (n = 30,098 students). We first coded every interaction of students as (initial or repeated) exploration, (initial or repeated) goal‐directed behaviour, or resetting the task. We then split the data according to task successes and failures. We used full‐path sequence analysis to identify groups of students with similar behavioural patterns in the respective tasks. Double‐checking and minimalistic behaviour were associated with success in CPS, while guessing and exploring task‐irrelevant content were associated with failure. Our findings held for both tasks investigated, which stem from two different CPS measurement frameworks. We thus gained detailed insight into the behavioural processes that are related to success and failure in CPS.
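The coding step described above — labelling each interaction as initial or repeated exploration, initial or repeated goal-directed behaviour, or a reset — can be sketched roughly as below. Everything here is an assumption for illustration: the event names, the set of goal-directed actions, and the rule that "repeated" means the same action was seen before do not come from the PISA 2012 log specification.

```python
# Illustrative sketch of coding raw log events into the behaviour
# categories named in the abstract. Event names and the coding rule
# are invented; they are not the study's actual coding scheme.

def code_actions(actions, goal_actions):
    """Map a student's action sequence to behaviour codes."""
    seen = set()
    coded = []
    for action in actions:
        if action == "reset":
            coded.append("reset")
            continue
        kind = "goal" if action in goal_actions else "explore"
        prefix = "repeated-" if action in seen else "initial-"
        seen.add(action)
        coded.append(prefix + kind)
    return coded

events = ["slider_up", "slider_up", "apply", "reset", "apply"]
print(code_actions(events, goal_actions={"apply"}))
# ['initial-explore', 'repeated-explore', 'initial-goal', 'reset', 'repeated-goal']
```

Sequences coded this way could then be compared with a sequence-analysis method (e.g., optimal matching over full paths) to cluster students with similar behavioural patterns, in the spirit of the analysis the abstract describes.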