Confirming the Reliability and Validity of Others' Evaluation Tools Before Adopting for Your Programs

J Nutr Educ Behav. 2017 May;49(5):441-450.e1. doi: 10.1016/j.jneb.2017.02.006.

Abstract

Objective: To confirm the reliability and validity of a previously validated evaluation instrument in a new context.

Methods: This cross-sectional study described the process and results of testing Cooking Matters' (CM) use of the Expanded Food and Nutrition Education Program's Behavior Checklist as a retrospective pretest/posttest. The researchers assessed reliability, face and content validity, and response-shift bias with 95 CM participants.

Results: Most items had acceptable face validity and moderate reliability; the remaining items lacked reliability, face validity, or content validity (i.e., they were unrelated to the CM curriculum).

Conclusions and implications: A proper match between evaluation tools and curricula is needed for appropriate program assessment; without it, outcome data can be misleading or invalid. Confirming validity is essential when adopting others' evaluation tools in new contexts, particularly for widely used programs such as federally funded programs and national nonprofit organizations.

Keywords: Cooking Matters; Expanded Food and Nutrition Education Program; Share Our Strength; low income; nutrition behavior checklist; validity.

MeSH terms

  • Adult
  • Cross-Sectional Studies
  • Female
  • Health Education / methods*
  • Humans
  • Male
  • Nutritional Sciences / education*
  • Nutritional Sciences / standards*
  • Program Evaluation / standards*
  • Reproducibility of Results