Quality of courses evaluated by 'predictions' rather than opinions: Fewer respondents needed for similar results

Med Teach. 2010;32(10):851-6. doi: 10.3109/01421591003697465.

Abstract

Background: A well-known problem with student surveys is low response rates. Experience with predicting electoral outcomes, which requires much smaller sample sizes, inspired us to adopt a similar approach to course evaluation. We expected that asking respondents to estimate the average opinions of their peers would require fewer respondents for comparable outcomes than asking for their own opinions.

Methods: Two course evaluation studies were performed among successive cohorts of first-year medical students (N = 380 and 450, respectively). Study 1: Half the cohort gave opinions on nine questions, while the other half predicted the average outcomes. A prize was offered for the three best predictions (motivational remedy). Study 2: Half the cohort gave opinions, a quarter made predictions without a prize, and a quarter made predictions with the previous year's results as prior knowledge (cognitive remedy). The number of respondents required for stable outcomes was determined following an iterative process, as sketched below. Differences between the numbers of respondents required and between average scores were analysed with ANOVA.
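The abstract states only that an iterative process was used to find the number of respondents required for stable outcomes, without specifying it. A minimal sketch of one plausible resampling approach is given below, assuming "stable" means that subsample means stay within a fixed tolerance of the full-group mean; the function name `required_respondents` and the tolerance, window, and trial counts are hypothetical, not taken from the paper.

```python
import random
import statistics

def required_respondents(scores, tol=0.1, window=5, trials=200, seed=42):
    """Estimate the smallest sample size at which the mean score stabilizes.

    For each candidate sample size n, draw `trials` random subsamples and
    check whether every subsample mean stays within `tol` of the full-group
    mean; the first n opening a run of `window` consecutive stable sizes is
    reported. All thresholds here are illustrative assumptions.
    """
    rng = random.Random(seed)
    target = statistics.mean(scores)
    stable_run = 0
    for n in range(2, len(scores) + 1):
        deviations = [
            abs(statistics.mean(rng.sample(scores, n)) - target)
            for _ in range(trials)
        ]
        if max(deviations) <= tol:
            stable_run += 1
            if stable_run >= window:
                return n - window + 1  # first size of the stable run
        else:
            stable_run = 0
    return len(scores)

# Example: simulated 5-point ratings from a half-cohort of 190 respondents
demo_rng = random.Random(1)
ratings = [demo_rng.choice([3, 4, 4, 4, 5]) for _ in range(190)]
print(required_respondents(ratings))
```

Under this reading, comparing the value returned for the opinion group against that for the prediction groups would reproduce the kind of required-sample-size contrast the study analysed with ANOVA.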

Results: In both studies, the prediction conditions required significantly fewer respondents (p < 0.001) for comparable outcomes. The informed prediction condition required the fewest respondents (N < 20).

Conclusion: Problems with response rates can be reduced by asking respondents to predict evaluation outcomes rather than to give their own opinions.

MeSH terms

  • Curriculum / standards*
  • Education, Medical
  • Humans
  • Netherlands
  • Program Evaluation / methods*
  • Quality Control*
  • Sample Size
  • Students, Medical / psychology
  • Surveys and Questionnaires