Addressing missing data in the estimation of time-varying treatments in comparative effectiveness research

Stat Med. 2023 Nov 30;42(27):5025-5038. doi: 10.1002/sim.9899. Epub 2023 Sep 19.

Abstract

Comparative effectiveness research is often concerned with evaluating treatment strategies sustained over time, that is, time-varying treatments. Inverse probability weighting (IPW) is commonly used to address time-varying confounding by re-weighting the sample according to the probability of treatment receipt at each time point. IPW can also be used to address missing data by re-weighting individuals according to the probability of their data being observed. The combination of these two distinct sets of weights may lead to inefficient estimates of treatment effects because the total weights can be highly variable. Alternatively, multiple imputation (MI) can be used to address missing data by replacing each missing observation with a set of plausible values drawn from the posterior predictive distribution of the missing data given the observed data. Recent studies have compared IPW and MI for handling missing data in the evaluation of time-varying treatments, but they focused on missing confounders and monotone missing data patterns. This article assesses the relative advantages of MI and IPW for addressing missing data in both outcomes and confounders measured over time, across monotone and non-monotone missing data settings. Through a comprehensive simulation study, we find that MI consistently yields estimates with low bias and greater precision than IPW across a wide range of scenarios. We illustrate the implications of method choice in an evaluation of biologic drugs for patients with severe rheumatoid arthritis, using the US National Databank for Rheumatic Diseases, in which 25% of participants had missing health outcomes or time-varying confounders.
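
For reference, the two sets of weights described in the abstract are conventionally combined multiplicatively. The display below is a minimal sketch in standard marginal structural model notation; the symbols (A for treatment, L for time-varying confounders, R for an observation indicator, with overbars denoting history) are illustrative conventions and are not taken from the paper itself.

\[
W_i \;=\; \underbrace{\prod_{t=0}^{T} \frac{1}{P\!\left(A_{it} \mid \bar{A}_{i,t-1}, \bar{L}_{it}\right)}}_{\text{treatment (confounding) weights}}
\;\times\;
\underbrace{\prod_{t=0}^{T} \frac{1}{P\!\left(R_{it}=1 \mid \bar{R}_{i,t-1}=1, \bar{A}_{i,t-1}, \bar{L}_{i,t-1}\right)}}_{\text{missing-data weights}}
\]

The product of two sets of inverse probabilities over many time points is what can become highly variable, which is the inefficiency motivating the comparison with MI.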

Keywords: comparative effectiveness research; inverse probability weighting; missing data; multiple imputation; time-varying confounding.

MeSH terms

  • Bias
  • Comparative Effectiveness Research*
  • Computer Simulation
  • Humans
  • Probability