Comparison of Assessment by a Virtual Patient and by Clinician-Educators of Medical Students' History-Taking Skills: Exploratory Descriptive Study

JMIR Med Educ. 2020 Mar 12;6(1):e14428. doi: 10.2196/14428.

Abstract

Background: A virtual patient (VP) can be a useful tool to foster the development of medical history-taking skills without the inherent constraints of the bedside setting. Although VPs hold the promise of contributing to the development of students' skills, documenting and assessing skills acquired through a VP is a challenge.

Objective: We propose a framework for the automated assessment of medical history taking within VP software and then test this framework by comparing VP scores with the judgment of 10 clinician-educators (CEs).

Methods: We built upon 4 domains of medical history taking to be assessed (breadth, depth, logical sequence, and interviewing technique), adapting them for implementation in a specific VP environment. A total of 10 CEs watched screen recordings of 3 students to assess their performance, first globally and then for each of the 4 domains.

Results: The scores provided by the VPs were slightly higher than, but comparable with, those given by the CEs for global performance and for depth, logical sequence, and interviewing technique. For breadth, the VP scores were higher than the CE scores for 2 of the 3 students.

Conclusions: Findings suggest that the VP assessment gives results akin to those that would be generated by CEs. Developing a model for what constitutes good history-taking performance in specific contexts may provide insights into how CEs generally think about assessment.

Keywords: automated scoring; computer software; educational assessment; medical education; medical history taking; medical history–taking skills; medical history–taking skills assessment; medical students; simulation training; virtual patients.