Towards thoughtful planning of ERP studies: How participants, trials, and effect magnitude interact to influence statistical power across seven ERP components

Psychophysiology. 2023 Jul;60(7):e14245. doi: 10.1111/psyp.14245. Epub 2022 Dec 28.

Abstract

In the field of EEG, researchers generally rely on rules of thumb, rather than a priori statistical calculations, when planning the number of trials to include in an ERP study. To aid in this practice, studies have tried to establish minimum numbers of trials required to reliably isolate ERPs. However, these guidelines do not necessarily apply across different study designs, as the reliability of an ERP waveform is not the same as the statistical power of a given experiment. Experiment parameters such as the number of participants, the number of trials, and effect magnitude interact to affect power in complex ways. Both under- and overpowered ERP studies represent a waste of time and resources that impedes the progress of the field. The current study addresses this gap by subsampling real ERP data to estimate the relationship between experiment design parameters and statistical power. The simulations include seven commonly studied ERP components: the ERN, LRP, N170, MMN, P3, N2pc, and N400. In the first set of experiments, we determined the probability of obtaining a statistically significant ERP effect for each component. In the second and third sets of experiments, we determined the probability of obtaining a statistically significant difference in ERP amplitude within and between groups for each component. Results indicate that the rules of thumb for ERP experiment design in the literature often lead to underpowered studies. Going forward, these results provide researchers with experiment design guidelines that are specific to the component under study, allowing for the design of sufficiently powered ERP studies.
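The general logic of estimating power by subsampling can be sketched as a Monte Carlo simulation: repeatedly draw a synthetic "experiment" of a given size, run the statistical test, and count how often the effect reaches significance. The sketch below is a minimal illustration of that idea, not the authors' pipeline; it generates synthetic single-trial amplitudes in place of the real EEG data, and the effect size, noise levels, and one-sample test against zero are all assumptions chosen for illustration.

```python
import random
import statistics

def estimate_power(n_participants, n_trials, effect_uv=2.0,
                   subject_sd=3.0, trial_noise_sd=20.0,
                   n_sims=500, seed=0):
    """Monte Carlo power estimate for detecting a nonzero mean ERP
    amplitude. Synthetic Gaussian data stand in for real single-trial
    EEG; all parameter values are illustrative assumptions."""
    rng = random.Random(seed)
    crit = 1.96  # normal approximation to the two-tailed t critical value, alpha = .05
    hits = 0
    for _ in range(n_sims):
        # Simulate one experiment: each participant contributes a
        # trial-averaged amplitude (averaging reduces trial-level noise).
        subject_means = []
        for _ in range(n_participants):
            true_amp = rng.gauss(effect_uv, subject_sd)
            trials = [rng.gauss(true_amp, trial_noise_sd)
                      for _ in range(n_trials)]
            subject_means.append(statistics.fmean(trials))
        # One-sample t statistic for mean amplitude vs. zero.
        m = statistics.fmean(subject_means)
        sd = statistics.stdev(subject_means)
        t = m / (sd / n_participants ** 0.5)
        if abs(t) > crit:
            hits += 1
    return hits / n_sims
```

Under these assumed parameters, power rises with both participants and trials, but with diminishing returns on trials once trial-level noise is averaged down below the between-subject variability, which is why participant count and trial count trade off in complex, component-specific ways.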

Keywords: EEG; ERPs; sample size; statistical power; trials.

MeSH terms

  • Electroencephalography* / methods
  • Evoked Potentials*
  • Female
  • Humans
  • Male
  • Reproducibility of Results
  • Research Design