Understanding data quality: Instructional comprehension as a practical metric in crowdsourced investigations of behavioral economic cigarette demand

Exp Clin Psychopharmacol. 2022 Aug;30(4):415-423. doi: 10.1037/pha0000579.

Abstract

Crowdsourcing platforms allow researchers to quickly recruit participants from substance-using populations, such as cigarette smokers, and to collect behavioral economic measures from them. Despite this broad utility and flexibility, data quality has been a persistent concern. In two studies recruiting cigarette smokers, we investigated the association between a practical quality control measure (accuracy on an instruction quiz) and the internal consistency of reported cigarettes smoked per day and of tobacco purchasing patterns in an experimental tobacco marketplace (ETM; Study 1) and a cigarette purchase task (CPT; Study 2). Participants (N = 312 in Study 1; N = 119 in Study 2) were recruited from Amazon Mechanical Turk. Both studies included task instructions, an instruction quiz, a purchase task, cigarette use and dependence questions, and demographics. Participants who answered all instruction items correctly: (a) reported the number of cigarettes smoked per day more consistently (partial η² = 0.11, p < .001, Study 1; partial η² = 0.09, p = .016, Study 2), (b) showed better model fit for their cigarette demand curves (partial η² = 0.23, p < .001, Study 1; partial η² = 0.08, p = .002, Study 2), and (c) purchased tobacco products in the ETM in a manner more consistent with their reported use. We conclude that instruction quizzes administered before purchase tasks may be useful for researchers evaluating demand data. Quizzes with multiple items may allow researchers to choose the level of data quality appropriate for their studies. (PsycInfo Database Record (c) 2022 APA, all rights reserved).
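
The abstract does not specify which demand model or software the authors used to index model fit. The sketch below assumes the exponentiated demand equation of Koffarnus et al. (2015), which is commonly applied to CPT data, and illustrates how a per-participant fit statistic (R²) of the kind summarized above might be computed; the price points, consumption values, and parameter bounds are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

K = 2.0  # span of consumption in log10 units; often fixed across participants


def exponentiated_demand(price, q0, alpha):
    # Exponentiated demand equation (Koffarnus et al., 2015):
    # Q = Q0 * 10^(k * (exp(-alpha * Q0 * C) - 1))
    return q0 * 10 ** (K * (np.exp(-alpha * q0 * price) - 1))


# Hypothetical CPT responses for one participant:
# cigarettes purchased at each unit price (USD).
prices = np.array([0.0, 0.10, 0.25, 0.50, 1.00, 2.00, 4.00, 8.00])
consumption = np.array([20.0, 20.0, 18.0, 15.0, 10.0, 5.0, 2.0, 1.0])

# Fit demand intensity (Q0) and elasticity (alpha) within plausible bounds.
params, _ = curve_fit(
    exponentiated_demand, prices, consumption,
    p0=[20.0, 0.01], bounds=([1e-3, 1e-6], [200.0, 1.0]),
)
q0_hat, alpha_hat = params

# R^2 as a per-participant index of model fit, the kind of metric that
# could then be compared between quiz-accuracy groups.
predicted = exponentiated_demand(prices, q0_hat, alpha_hat)
ss_res = np.sum((consumption - predicted) ** 2)
ss_tot = np.sum((consumption - np.mean(consumption)) ** 2)
r_squared = 1.0 - ss_res / ss_tot

print(f"Q0 = {q0_hat:.2f}, alpha = {alpha_hat:.4f}, R^2 = {r_squared:.3f}")
```

In practice, such fit statistics would be computed for every respondent and then compared across groups defined by instruction-quiz accuracy, analogous to the group differences in model fit reported above.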

MeSH terms

  • Comprehension
  • Crowdsourcing*
  • Economics, Behavioral
  • Humans
  • Smokers
  • Tobacco Products*