The Curse of Performative User Studies

IEEE Comput Graph Appl. 2023 Nov-Dec;43(6):112-116. doi: 10.1109/MCG.2023.3315759.

Abstract

Computer graphics research frequently evaluates research outputs with user studies, often through online crowdworking platforms. When performed carefully and thoughtfully, studies of human behavior and preferences provide valuable insights, useful for both developing and evaluating new tools. Yet, I argue that many current studies are performative: they result from reviewers' expectation that "papers should have some evaluation," not from careful thought about the value and usefulness of the studies themselves. These casually done studies are often uninformative or misleading, while placing undue burden on authors and reviewers. The expectation of positive user evaluation results can also inhibit creative new work. I call for reviewers to be more thoughtful about asking for user studies, for authors to be more thoughtful when they perform studies, and for our field to conduct new research and create new guidelines on when and how user studies are genuinely useful.