The Use of Expert Elicitation among Computational Modeling Studies in Health Research: A Systematic Review

Med Decis Making. 2022 Jul;42(5):684-703. doi: 10.1177/0272989X211053794. Epub 2021 Oct 25.

Abstract

Background: Expert elicitation (EE) has been used across disciplines to estimate input parameters for computational modeling research when information is sparse or conflicting.

Objectives: We conducted a systematic review to compare EE methods used to generate model input parameters in health research.

Data sources: PubMed and Web of Science.

Study eligibility: Modeling studies that reported the use of EE as the source for model input probabilities were included if they were published in English before June 2021 and reported health outcomes.

Data abstraction and synthesis: Studies were classified as using "formal" EE methods if they explicitly reported details of their elicitation process. Those that stated use of expert opinion but provided limited information were classified as using "indeterminate" methods. For both groups, we abstracted citation details, study design, modeling methodology, a description of elicited parameters, and elicitation methods. Comparisons were made between elicitation methods.

Study appraisal: Studies that conducted a formal EE were appraised on the reporting quality of the EE. Quality appraisal was not conducted for studies of indeterminate methods.

Results: The search identified 1520 articles, of which 152 were included. Of the included studies, 40 were classified as formal EE and 112 as indeterminate methods. Most studies were cost-effectiveness analyses (77.6%). Forty-seven of the indeterminate-method studies provided no information on how estimates were generated. Among formal EEs, the average reporting quality score was 9 out of 16.

Limitations: Elicitations on nonhealth topics and those reported in the gray literature were not included.

Conclusions: We found poor reporting of EE methods used in modeling studies, making it difficult to discern meaningful differences in approaches. Improved quality standards for EEs would improve the validity and replicability of computational models.

Highlights:

  • We find extensive use of expert elicitation for the development of model input parameters, but most studies do not provide adequate details of their elicitation methods.
  • Lack of reporting hinders greater discussion of the merits and challenges of using expert elicitation for model input parameter development.
  • There is a need to establish expert elicitation best practices and reporting guidelines.

Keywords: computational modeling; expert elicitation; expert judgment; expert opinion; systematic review.

Publication types

  • Systematic Review
  • Research Support, N.I.H., Extramural
  • Research Support, U.S. Gov't, P.H.S.

MeSH terms

  • Computer Simulation
  • Cost-Benefit Analysis
  • Expert Testimony*
  • Humans
  • Probability
  • Research Design*