Sparsity in an artificial neural network predicts beauty: Towards a model of processing-based aesthetics

PLoS Comput Biol. 2023 Dec 4;19(12):e1011703. doi: 10.1371/journal.pcbi.1011703. eCollection 2023 Dec.

Abstract

Generations of scientists have pursued the goal of defining beauty. While early scientists focused on objective criteria of beauty ('feature-based aesthetics'), philosophers and artists alike have since proposed that beauty arises from the interaction between the object and the individual who perceives it. The aesthetic theory of fluency formalizes this idea of interaction by proposing that beauty is determined by the efficiency of information processing in the perceiver's brain ('processing-based aesthetics'), and that efficient processing induces a positive aesthetic experience. The theory is supported by numerous psychological results; however, to date there is no quantitative predictive model to test it on a large scale. In this work, we propose to leverage the capacity of deep convolutional neural networks (DCNNs) to model the processing of information in the brain by studying the link between beauty and neuronal sparsity, a measure of information processing efficiency. Whether analyzing pictures of faces, figurative paintings, or abstract paintings, neuronal sparsity explains up to 28% of variance in beauty scores, and up to 47% when combined with a feature-based metric. However, we also found that sparsity is either positively or negatively correlated with beauty depending on the layer of the DCNN. Our quantitative model stresses the importance of considering how information is processed, in addition to the content of that information, when predicting beauty, but also suggests an unexpectedly complex relationship between fluency and beauty.
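As a rough illustration of the kind of analysis described above, the sketch below computes a per-layer activation sparsity (here, the fraction of zero post-ReLU activations) in a pretrained VGG16 and correlates it with beauty ratings across images. This is a minimal sketch under assumed choices: the paper's exact sparsity measure, network architecture, and stimuli may differ, and the image paths and beauty scores shown are placeholders.

```python
# Sketch: per-layer activation sparsity in a pretrained DCNN vs. beauty ratings.
# Sparsity is taken here as the fraction of zero post-ReLU activations; the
# network (VGG16) and this definition are assumptions, not the paper's method.
import torch
from torchvision import models, transforms
from PIL import Image
from scipy.stats import pearsonr

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).eval()
relu_layers = [m for m in model.features if isinstance(m, torch.nn.ReLU)]

def layer_sparsity(image_path):
    """Return, for one image, the fraction of zero activations after each ReLU."""
    activations = []
    hooks = [
        m.register_forward_hook(
            lambda _m, _inp, out, acts=activations: acts.append(out.detach()))
        for m in relu_layers
    ]
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        model(x)
    for h in hooks:
        h.remove()
    return [(a == 0).float().mean().item() for a in activations]

# Placeholder stimuli and mean beauty ratings (hypothetical values).
image_paths = ["img_001.jpg", "img_002.jpg", "img_003.jpg"]
beauty = [6.2, 3.8, 7.5]

sparsity = [layer_sparsity(p) for p in image_paths]

# Correlate sparsity in each layer with the beauty scores across images.
for layer_idx in range(len(relu_layers)):
    r, p = pearsonr([s[layer_idx] for s in sparsity], beauty)
    print(f"layer {layer_idx}: r = {r:.2f}, p = {p:.3f}")
```

With a real stimulus set, the per-layer correlations would show whether sparsity relates positively or negatively to beauty at different processing depths, which is the layer-dependent pattern the abstract reports.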

MeSH terms

  • Art*
  • Cognition
  • Esthetics
  • Judgment* / physiology
  • Neural Networks, Computer

Grants and funding

This study was funded by the Agence Nationale de la Recherche (ANR-20-CE02-0005-01) received by JPR and WP, by the National Science Foundation (NSF IOS 2026334) received by TM and JPR, and by the CNRS through the MITI interdisciplinary programs (Programme Interne Blanc MITI 2023.1 - Projet: DEEPCOM - artificial intelligence for the study of communication) received by WP. ND and ST received a salary from ANR-20-CE02-0005-01. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.