Background: The Objective Structured Clinical Examination (OSCE) is increasingly used at medical schools to assess practical competencies. To compare the outcomes of students at different medical schools, we introduced standardized OSCE stations with identical checklists.
Methods: We investigated examiner bias at standardized OSCE stations for knee- and shoulder-joint examinations, which were implemented in the surgical OSCE at five different medical schools. The assessment checklists consisted of part A, covering knowledge and performance of the skill, and part B, covering communication and interaction with the patient. At each medical faculty, one reference examiner also scored independently of the local examiner. The scores from both examiners were compared and analysed for inter-rater reliability and for correlation with the examiner's level of clinical experience. Possible gender bias was also evaluated.
Results: In part A of the checklist, local examiners graded students higher than the reference examiner did; in part B, no consistent trend emerged. Inter-rater reliability was weak, and scoring correlated only weakly with the examiner's level of experience. Female examiners rated generally higher, but male examiners scored significantly higher when the examinee was female.
Conclusions: These examiner effects, which appear even in standardized settings, may influence outcomes even when students perform equally well. Examiners need to be made aware of these biases prior to examining.
Background: Feedback is an essential element of learning. Despite this, students complain about receiving too little feedback in medical examinations, e.g., in an objective structured clinical examination (OSCE). This study aims to implement a written structured feedback tool for use in OSCEs and to analyse the attitudes of students and examiners towards this kind of feedback.
Methods: The participants were OSCE examiners and third-year medical students. This prospective study used a multistage design. In the first step, unstructured interviews with the examiners formed the basis for developing a feedback tool, which was evaluated and then adopted in the subsequent steps.
Results: In total, 351 students and 51 examiners participated in this study. A baseline form was created for each category of OSCE station and supplemented with station-specific items. Each item was rated on a three-point scale. In addition to the preformulated answer options, each domain included space for individual comments.
A total of 87.5% of the students and 91.6% of the examiners agreed or somewhat agreed that written feedback should continue to be used in future OSCEs.
Conclusion: Implementing structured, written feedback in a curricular, summative examination is feasible, and both examiners and students would like this feedback to continue.