Background: The Objective Structured Clinical Examination (OSCE) is increasingly used at medical schools to assess practical competencies. To compare the outcomes of students at different medical schools, we introduced standardized OSCE stations with identical checklists.
Methods: We investigated examiner bias at standardized OSCE stations for knee- and shoulder-joint examinations, which were implemented into the surgical OSCE at five different medical schools. The assessment checklists consisted of part A for knowledge and performance of the skill and part B for communication and interaction with the patient. At each medical faculty, one reference examiner also scored independently of the local examiner. The scores from both examiners were compared and analysed for inter-rater reliability and for correlation with the examiner's level of clinical experience. Possible gender bias was also evaluated.
Results: In part A of the checklist, local examiners graded students higher than the reference examiner did; in part B, no consistent trend was observed. Inter-rater reliability was weak, and scoring correlated only weakly with the examiner's level of experience. Female examiners generally rated higher, but male examiners scored significantly higher when the examinee was female.
Conclusions: These examiner effects, present even in standardized situations, may influence outcomes even when students perform equally well. Examiners need to be made aware of these biases prior to examining.
Background: It is well accepted that medical faculty teaching staff require an understanding of educational theory and pedagogical methods for effective medical teaching. The purpose of this study was to evaluate the effectiveness of a 5-day teaching education program.
Methods: An open prospective interventional study using quantitative and qualitative instruments was performed, covering all four levels of the Kirkpatrick model: Evaluation of 1) "Reaction" on a professional and emotional level using standardized questionnaires; 2) "Learning" applying a multiple choice test; 3) "Behavior" by self-, peer-, and expert assessment of teaching sessions with semistructured interviews; and 4) "Results" from student evaluations.
Results: Our data indicate the success of the educational intervention at all observed levels. 1) Reaction: The participants showed a high acceptance of the instructional content. 2) Learning: There was a significant increase in knowledge (P<0.001) as deduced from a pre-post multiple-choice questionnaire, which was retained at 6 months (P<0.001). 3) Behavior: Peer-, self-, and expert-assessment indicated a transfer of learning into teaching performance. Semistructured interviews reflected a higher level of professionalism in medical teaching by the participants. 4) Results: Teaching performance ratings improved in students' evaluations.
Conclusions: Our results demonstrate the success of a 5-day education program in imparting knowledge and skills that improve the performance of medical educators. This multimethodological approach, using both qualitative and quantitative measures, may serve as a model for evaluating the effectiveness of comparable interventions in other settings.