Statistical analysis of the primary outcome in acute stroke trials

Stroke. 2012 Apr;43(4):1171-8. doi: 10.1161/STROKEAHA.111.641456. Epub 2012 Mar 15.

Abstract

Common outcome scales in acute stroke trials are ordered categorical or pseudocontinuous in structure, yet most have been analyzed as binary measures. Fixed dichotomous analysis of ordered categorical outcomes after stroke (such as the modified Rankin Scale) is rarely the most statistically efficient approach and usually requires a larger sample size to demonstrate efficacy than the alternatives. Preferred statistical approaches include sliding dichotomous, ordinal, or continuous analyses. Because no single approach works best for all acute stroke trials, it is vital that studies are designed with a full understanding of: the type of patients to be enrolled (in particular their case mix, which depends critically on age and stroke severity); the potential mechanism by which the intervention works (ie, will it tend to move all patients somewhat, or some patients a lot, and is a common hazard present); a realistic assessment of the likely effect size, and therefore the necessary sample size; and what the intervention will cost if implemented in clinical practice. If these approaches are followed, the risk of missing useful treatment effects in acute stroke will diminish.
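The contrast between a fixed dichotomous analysis and an ordinal analysis of the modified Rankin Scale can be sketched in code. The following is an illustrative example only, not an analysis from the paper: the mRS category counts for the two arms are invented, the cut point (mRS 0-2 = "good outcome") is one common convention, and the ordinal analysis is represented here by a Mann-Whitney U test across the full seven-category scale.

```python
# Illustrative sketch (hypothetical data, not from the trial literature):
# a fixed dichotomous analysis discards information by collapsing the
# 7-level mRS to a single cut point; an ordinal test uses the whole scale.
import numpy as np
from scipy.stats import chi2_contingency, mannwhitneyu

# Hypothetical patient counts per mRS category 0..6 in two arms.
treatment = [30, 40, 35, 25, 20, 15, 10]
control = [20, 30, 35, 30, 28, 20, 12]

# Fixed dichotomous analysis: "good outcome" = mRS 0-2 vs 3-6.
good_t, bad_t = sum(treatment[:3]), sum(treatment[3:])
good_c, bad_c = sum(control[:3]), sum(control[3:])
_, p_dich, _, _ = chi2_contingency([[good_t, bad_t], [good_c, bad_c]])

# Ordinal analysis: Mann-Whitney U across all seven categories,
# so a shift anywhere on the scale contributes to the test.
scores_t = np.repeat(np.arange(7), treatment)
scores_c = np.repeat(np.arange(7), control)
_, p_ord = mannwhitneyu(scores_t, scores_c, alternative="two-sided")

print(f"dichotomous p = {p_dich:.4f}, ordinal p = {p_ord:.4f}")
```

In practice, a proportional-odds (ordinal logistic) regression is often preferred over the Mann-Whitney test because it also yields a common odds ratio and allows covariate adjustment; the sketch above is only the simplest ordinal comparison.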

Publication types

  • Research Support, Non-U.S. Gov't
  • Review

MeSH terms

  • Humans
  • Randomized Controlled Trials as Topic / economics
  • Stroke / economics*
  • Stroke / mortality*
  • Stroke / therapy*