ABSTRACT
Introduction: Over the years, performance assessment (PA) has been widely employed in medical education, the Objective Structured Clinical Examination (OSCE) being a prominent example. Performance assessment typically involves multiple raters, so consistency among the scores provided by those raters is a precondition for an accurate assessment. Inter-rater agreement and inter-rater reliability are two indices used to ensure such scoring consistency. This research primarily examined the relationship between inter-rater agreement and inter-rater reliability.
Materials and Methods: This study used 3 sets of simulated data, based on raters’ evaluations of student performance, to examine the relationship between inter-rater agreement and inter-rater reliability.
Results: Data set 1 had high inter-rater agreement but low inter-rater reliability, data set 2 had high inter-rater reliability but low inter-rater agreement, and data set 3 had both high inter-rater agreement and high inter-rater reliability.
Conclusion: Inter-rater agreement and inter-rater reliability can, but do not necessarily, coexist; the presence of one does not guarantee that of the other. Both are important for PA: the former shows the stability of the scores a student receives from different raters, while the latter shows the consistency of scores across different students from different raters.

The evaluation of clinical performance is important not only in healthcare but also in medical education. In medical education, the Objective Structured Clinical Examination (OSCE) and the mini-clinical evaluation exercise (mini-CEX) are 2 types of performance assessment (PA) used to measure medical students’ clinical performance.
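The study’s own simulated data sets and indices are available only in the full article; as a minimal sketch of how agreement and reliability can diverge, the Python example below uses hypothetical two-rater scores with two assumed stand-in measures (the proportion of students whose two ratings fall within one point for agreement, and the Pearson correlation across students for reliability). The specific numbers and metric choices are illustrative assumptions, not the authors’ data or methods.

```python
import numpy as np

def agreement(r1, r2, tolerance=1.0):
    # Stand-in for inter-rater agreement (assumed metric): proportion of
    # students whose two ratings differ by no more than `tolerance` points.
    return float(np.mean(np.abs(r1 - r2) <= tolerance))

def reliability(r1, r2):
    # Stand-in for inter-rater reliability (assumed metric): Pearson
    # correlation between the two raters' scores across students.
    return float(np.corrcoef(r1, r2)[0, 1])

# Hypothetical data set 1: raters give nearly identical scores, but the scores
# barely separate students -> high agreement, low reliability.
set1 = (np.array([80, 81, 80, 79, 80, 81, 79, 80], dtype=float),
        np.array([81, 80, 79, 80, 81, 80, 80, 79], dtype=float))

# Hypothetical data set 2: rater 2 is systematically 10 points harsher but
# ranks students identically -> high reliability, low agreement.
r1 = np.array([95, 88, 82, 75, 68, 60, 55, 50], dtype=float)
set2 = (r1, r1 - 10)

# Hypothetical data set 3: raters give nearly identical scores AND the scores
# clearly separate students -> high agreement and high reliability.
set3 = (np.array([95, 88, 82, 75, 68, 60, 55, 50], dtype=float),
        np.array([94, 89, 81, 76, 67, 61, 54, 51], dtype=float))

for name, (a, b) in {"data set 1": set1, "data set 2": set2, "data set 3": set3}.items():
    print(f"{name}: agreement={agreement(a, b):.2f}, reliability={reliability(a, b):.2f}")
```

Running the sketch prints agreement of 1.00 with reliability near 0.00 for data set 1, agreement of 0.00 with reliability of 1.00 for data set 2, and high values of both for data set 3, mirroring the pattern reported in the Results under these assumed metrics.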