Design of a Situational Judgment Test for Preclinical Interprofessional Collaboration

Acad Med. 2021 Jul 1;96(7):992-996. doi: 10.1097/ACM.0000000000004117.

Abstract

Problem: There is an unmet need for economically feasible, valid, reliable, and contextually relevant assessments of interprofessional collaborative knowledge and skills, particularly at the early stages of health professions education. This study sought to develop, and to gather content and internal structure validity evidence for, an Interprofessional Situational Judgment Test (IPSJT), a tool for measuring students' interprofessional collaborative intentions during the early stages of their professional development.

Approach: After an item development and refinement process (January-June 2018), an 18-question IPSJT was administered to 953 first-year students enrolled in 10 health professions degree programs at the University of Florida Health Science Center in October 2018. The IPSJT's performance was evaluated using item-level analyses (including item difficulty), test-retest reliability, and exploratory factor analysis.

Outcomes: Seven hundred thirty-seven (77.3%) students consented to the use of their data. Student IPSJT scores ranged from 0 to 69, averaging 42.68 (standard deviation = 12.28), with some statistically significant differences in performance by health professions degree program. IPSJT item difficulties ranged from 0.13 to 0.92. After one item with poor psychometric properties was excluded from analysis, the IPSJT demonstrated an overall reliability of 0.62. Students were more successful at identifying the least effective responses than the most effective ones. Test-retest reliability provided evidence of consistency (r = 0.50, P < .001) and of similar item difficulty across administrations. An exploratory factor analysis indicated a 3-factor model with multiple cross-factor loadings.
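
For readers unfamiliar with the psychometric analyses reported above, the sketch below illustrates how item difficulty, an internal-consistency reliability coefficient (Cronbach's alpha, one common choice; the abstract does not name the coefficient used), test-retest correlation, and a 3-factor exploratory factor analysis are commonly computed. It runs on hypothetical, dichotomously scored response data rather than the study's dataset, and the `factor_analyzer` package and all variable names are assumptions for illustration only.

```python
import numpy as np
import pandas as pd
from scipy.stats import pearsonr
from factor_analyzer import FactorAnalyzer

rng = np.random.default_rng(0)

# Hypothetical dichotomously scored responses (1 = keyed answer chosen),
# standing in for an 18-item test; shape: students x items.
n_students, n_items = 737, 18
scores = pd.DataFrame(
    rng.binomial(1, rng.uniform(0.13, 0.92, size=n_items),
                 size=(n_students, n_items)),
    columns=[f"item_{i + 1}" for i in range(n_items)],
)

# Item difficulty: proportion of students answering each item correctly
# (the abstract reports values from 0.13 to 0.92).
item_difficulty = scores.mean(axis=0)

# Cronbach's alpha as one common index of overall internal-consistency
# reliability (the abstract reports 0.62 without naming the coefficient).
k = scores.shape[1]
item_var = scores.var(axis=0, ddof=1).sum()
total_var = scores.sum(axis=1).var(ddof=1)
cronbach_alpha = (k / (k - 1)) * (1 - item_var / total_var)

# Test-retest reliability: Pearson correlation between total scores from two
# administrations (a second, hypothetical administration is simulated here).
retest_total = scores.sum(axis=1) + rng.normal(0, 2, size=n_students)
r, p = pearsonr(scores.sum(axis=1), retest_total)

# Exploratory factor analysis with 3 factors and an oblique rotation,
# mirroring the 3-factor model described in the Outcomes.
fa = FactorAnalyzer(n_factors=3, rotation="oblimin")
fa.fit(scores)
loadings = pd.DataFrame(fa.loadings_, index=scores.columns)

print(item_difficulty.round(2))
print(f"Cronbach's alpha: {cronbach_alpha:.2f}")
print(f"Test-retest r: {r:.2f} (P = {p:.3f})")
print(loadings.round(2))
```

In practice, items loading strongly on more than one factor (the cross-factor loadings noted above) would be flagged for review, and items with very low or very high difficulty or poor item-total correlations would be candidates for revision or removal.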

Next steps: This work represents the first step toward the development of a valid, reliable IPSJT for early learners. The emergent 3-factor model provides evidence that multiple competencies can be assessed in early learners via this tool. Additional research is necessary to build a more robust question bank, explore different scoring and response methods, and gather additional sources of validity evidence, including relations to other variables.

MeSH terms

  • Academic Medical Centers / organization & administration
  • Clinical Competence / statistics & numerical data*
  • Cooperative Behavior
  • Factor Analysis, Statistical
  • Florida
  • Health Occupations / education*
  • Humans
  • Interprofessional Relations / ethics*
  • Judgment / physiology*
  • Knowledge
  • Learning / physiology
  • Outcome Assessment, Health Care
  • Reproducibility of Results
  • Research Design / statistics & numerical data
  • Students, Medical / psychology
  • Surveys and Questionnaires