Longitudinal assessment in an undergraduate longitudinal integrated clerkship: the mini Clinical Evaluation Exercise (mCEX) profile

Med Teach. 2013 Aug;35(8):e1416-21. doi: 10.3109/0142159X.2013.778392. Epub 2013 Apr 2.

Abstract

Aim: Student and assessor performance was examined over three academic years, using the mini Clinical Evaluation Exercise (mCEX) as a continuous feedback tool across all disciplines and all learning contexts for an entire integrated undergraduate year.

Methods: Students could complete any number of mCEX but had to submit a minimum number per discipline. Students were free to choose their assessors; assessors were not trained. Data were collected in a customised database and analysed in SPSS version 18.0.0.

Results: 5686 mCEX were submitted during 2008-2010 (Cronbach's α = 0.80). Marks were affected by doctor grade (F = 146.6, p < 0.001), difficulty of the clinical encounter (F = 33.3, p < 0.001) and clinical discipline (F = 13.8, p < 0.001). Students most frequently sought harder markers (experienced general practitioners/hospital specialists). Increases in mCEX marks were greatest during the early, formative months (F = 42.7, p < 0.001). More mCEX were submitted than required, with no differentiation between weak and strong students (r_xy = 0.22, p = 0.78).
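
The reliability and group-comparison statistics above (Cronbach's α; one-way ANOVA F-tests by assessor grade, encounter difficulty and discipline) were computed in SPSS. For readers wishing to reproduce the same calculations outside SPSS, a minimal Python/SciPy sketch follows; the score matrix, group means and sample sizes below are invented for illustration and are not the study's data.

```python
import numpy as np
from scipy import stats

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_observations, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)

# Hypothetical mCEX ratings: rows = encounters, columns = rating items.
# A shared per-encounter component induces inter-item correlation.
scores = rng.normal(7, 1, size=(200, 6)) + rng.normal(0, 0.5, size=(200, 1))
print(f"Cronbach's alpha: {cronbach_alpha(scores):.2f}")

# One-way ANOVA of overall mark by assessor grade (hypothetical groups).
junior, registrar, specialist = (rng.normal(m, 1, 100) for m in (6.5, 7.0, 7.5))
f_stat, p_val = stats.f_oneway(junior, registrar, specialist)
print(f"F = {f_stat:.1f}, p = {p_val:.3g}")
```

scipy.stats.f_oneway returns the same F and p values as SPSS's one-way ANOVA; Cronbach's α is computed directly from the item-variance formula rather than via a library routine.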

Conclusions: Undergraduate students in longitudinal clerkships acquire most skills during the early 'formative' period of learning. They seek out 'hard' assessors, consistent with year-long mentoring relationships and the educational/feedback value such assessors offer. Assessors mark in a manner consistent with a framework of encouraging student performance. Over an entire longitudinal clerkship, students complete more mCEX than the course requires. This study confirms the impact of the longitudinal context on assessor and student behaviour.

MeSH terms

  • Clinical Clerkship / organization & administration*
  • Clinical Competence
  • Education, Medical, Undergraduate / organization & administration*
  • Educational Measurement / methods
  • Feedback
  • Humans
  • Longitudinal Studies
  • Medicine
  • Program Evaluation
  • Reproducibility of Results