Validating Human-Operant Software: A Case Example

Behav Anal (Wash D C). 2022 Nov;22(4):389-403. doi: 10.1037/bar0000244. Epub 2022 Apr 21.

Abstract

Human-operant experiments conducted with computer software facilitate translational research by assessing the generality of basic research findings and exploring previously untested predictions about behavior in a cost-effective and efficient manner. However, previous human-operant research with computer-based tasks has included little or no description of rigorous validation procedures for the experimental apparatus (i.e., the software used in the experiment). This omission, combined with a general lack of guidance on how to thoroughly validate experimental software, raises the possibility that nascent researchers may insufficiently validate their computer-based apparatus. In this paper, we provide a case example of the rigor required to validate experimental software by describing the procedures we used to validate the apparatus that Smith and Greer (2021) used to assess relapse via a crowdsourcing platform. These validation procedures identified several issues with early iterations of the software, demonstrating how failing to validate human-operant software can introduce confounds into similar experiments. We describe our validation procedures in detail to give others pursuing similar computer-based research an exemplar of the rigorous testing needed to ensure precision and reliability in human-operant experiments.

Keywords: apparatus validation; computer software; human operant; translational research.