Bias in Odds Ratios From Logistic Regression Methods With Sparse Data Sets

J Epidemiol. 2023 Jun 5;33(6):265-275. doi: 10.2188/jea.JE20210089. Epub 2022 Apr 1.

Abstract

Background: Logistic regression models are widely used to evaluate the association between a binary outcome and a set of covariates. However, when there are few study participants at some combinations of outcome and covariate levels, the odds ratio (OR) estimated by the maximum likelihood (ML) method is biased. This bias is known as sparse data bias, and the estimated OR can take impossibly large values. Despite this, sparse data bias has been ignored in most epidemiological studies.
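As an illustration of this point (the counts below are invented for illustration and are not data from the paper), a single sparse 2 × 2 table is enough to push the ML odds ratio to an implausible value:

```python
# Illustrative sketch: a sparse 2x2 table (4 exposed with 3 events,
# 20 unexposed with 1 event) gives an ML odds ratio of 57, far larger
# than any plausible epidemiological effect. Counts are hypothetical.
import numpy as np
import statsmodels.api as sm

x = np.array([1] * 4 + [0] * 20)             # exposure indicator
y = np.array([1, 1, 1, 0] + [1] + [0] * 19)  # binary outcome

fit = sm.Logit(y, sm.add_constant(x)).fit(disp=0)
print(np.exp(fit.params[1]))                 # crude OR = (3*19)/(1*1) = 57
```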

Methods: We review several methods for reducing sparse data bias in logistic regression. The primary aim is to evaluate Bayesian methods against the classical ML, Firth's, and exact methods in a simulation study. We also apply these methods to a real data set.
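For concreteness, the sketch below implements Firth's penalized logistic regression, one of the classical methods compared here, using the standard modified-score iteration (the hat-matrix adjustment h_i(1/2 - pi_i)). It is a minimal illustration, not the authors' code or simulation setup, and the toy data at the end are hypothetical.

```python
# Minimal sketch of Firth's bias-reduced logistic regression.
import numpy as np

def firth_logit(X, y, n_iter=100, tol=1e-8):
    """Firth-penalized coefficient estimates; X must include an intercept column."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        pi = 1.0 / (1.0 + np.exp(-(X @ beta)))
        W = pi * (1.0 - pi)
        info = X.T @ (X * W[:, None])                   # Fisher information X'WX
        info_inv = np.linalg.inv(info)
        A = X * np.sqrt(W)[:, None]
        h = np.einsum("ij,jk,ik->i", A, info_inv, A)    # hat-matrix leverages
        # Firth's modified score: X'(y - pi + h * (0.5 - pi))
        step = info_inv @ (X.T @ (y - pi + h * (0.5 - pi)))
        beta = beta + step
        if np.max(np.abs(step)) < tol:
            break
    return beta

# Toy example: no events among the exposed, so the ML odds ratio diverges,
# while Firth's penalized estimate stays finite.
x = np.array([1] * 10 + [0] * 10)
y = np.array([0] * 10 + [1] * 5 + [0] * 5)
X = np.column_stack([np.ones(20), x])
print(np.exp(firth_logit(X, y)[1]))   # finite penalized OR estimate
```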

Results: Our simulation results indicate that the bias of the OR from the ML, Firth's, and exact methods was considerable. Furthermore, the Bayesian methods that model the prior covariance matrix of the regression coefficients with a hyper-g prior reduced the bias under the null hypothesis, whereas the Bayesian methods with log F-type priors reduced the bias under the alternative hypothesis.
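A log F(m, m) prior centred at zero on a single coefficient can be imposed with the well-known data-augmentation trick: append one pseudo-record in which only that covariate is 1 (all other columns, including the intercept, are 0) contributing m/2 events out of m trials, then fit by ordinary binomial regression. The sketch below rests on that assumption and is not the authors' implementation; it reuses the hypothetical sparse table from the Background example.

```python
# Sketch: posterior-mode fit under a log F(m, m) prior via data augmentation.
import numpy as np
import statsmodels.api as sm

def logF_augmented_fit(X, y, j, m=2.0):
    """Fit with a log F(m, m) prior (centred at 0) on coefficient j.

    X must already contain an intercept column; y is 0/1.
    """
    n, p = X.shape
    endog = np.column_stack([y, 1 - y]).astype(float)   # (events, non-events)
    exog = X.astype(float)
    pseudo_x = np.zeros(p)
    pseudo_x[j] = 1.0                                    # prior record: covariate j only
    endog = np.vstack([endog, [m / 2.0, m / 2.0]])       # m/2 events out of m trials
    exog = np.vstack([exog, pseudo_x])
    return sm.GLM(endog, exog, family=sm.families.Binomial()).fit()

# Hypothetical sparse table: ML OR = 57; the penalized OR is finite and shrunken.
x = np.array([1] * 4 + [0] * 20)
y = np.array([1, 1, 1, 0] + [1] + [0] * 19)
X = sm.add_constant(x)
fit = logF_augmented_fit(X, y, j=1, m=2.0)
print(np.exp(fit.params[1]))
```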

Conclusion: The Bayesian methods using log F-type priors and the hyper-g prior are superior to the ML, Firth's, and exact methods when fitting logistic models to sparse data sets. Which method is preferable depends on whether the null or the alternative hypothesis holds. Sensitivity analysis is important for understanding the robustness of the results in sparse data analyses.
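One form such a sensitivity analysis could take (a compact sketch only; the data and the grid of prior strengths m are illustrative assumptions, reusing the augmentation trick above) is to refit the same sparse data under increasingly informative log F(m, m) priors and check how much the OR moves:

```python
# Sketch of a prior-strength sensitivity analysis for a sparse data set.
import numpy as np
import statsmodels.api as sm

x = np.array([1] * 4 + [0] * 20)
y = np.array([1, 1, 1, 0] + [1] + [0] * 19)
X = sm.add_constant(x).astype(float)
endog = np.column_stack([y, 1 - y]).astype(float)

for m in (1.0, 2.0, 4.0, 8.0):
    pseudo_x = np.array([0.0, 1.0])                     # prior record: exposure only
    e = np.vstack([endog, [m / 2.0, m / 2.0]])
    Xa = np.vstack([X, pseudo_x])
    fit = sm.GLM(e, Xa, family=sm.families.Binomial()).fit()
    print(f"m = {m:>3}: OR = {np.exp(fit.params[1]):.2f}")   # stronger prior, more shrinkage
```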

Keywords: Bayesian methods; Firth’s penalization; exact logistic regression method; g-prior.

Publication types

  • Review
  • Research Support, Non-U.S. Gov't

MeSH terms

  • Bayes Theorem
  • Bias
  • Computer Simulation
  • Humans
  • Japan
  • Logistic Models*
  • Odds Ratio