Compare the performance of the models in art classification

PLoS One. 2021 Mar 12;16(3):e0248414. doi: 10.1371/journal.pone.0248414. eCollection 2021.

Abstract

Because large numbers of artworks are preserved in museums and galleries, considerable effort is required to classify these works by genre, style, and artist. Recent technological advances have enabled an increasing number of artworks to be digitized. It is therefore necessary to teach computers to analyze (e.g., classify and annotate) art to assist people in such tasks. In this study, we tested 7 different models on 3 different datasets under the same experimental setup to compare their art classification performance with and without transfer learning. The models were compared on their ability to classify genre, style, and artist. Comparison with previous work shows that model performance can be effectively improved by optimizing the model structure, and our results achieve state-of-the-art performance on all classification tasks across the three datasets. In addition, we visualized the style and genre classification process to better understand the difficulties that computers face when classifying art. Finally, we used the trained models to perform similarity searches and obtained performance improvements.
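The record above gives no implementation details, but the central technique the abstract names is transfer learning for image classification. The sketch below is only a minimal illustration of that idea, not the paper's actual pipeline: it assumes a PyTorch/torchvision environment, an ImageNet-pretrained ResNet-50 backbone (one plausible choice; the 7 models compared in the study are not listed in this record), and a hypothetical ImageFolder-style dataset path with one folder per genre, style, or artist class.

```python
# Minimal transfer-learning sketch for art classification (genre/style/artist).
# Assumptions not taken from the paper: PyTorch/torchvision, an ImageNet-pretrained
# ResNet-50 backbone, and a hypothetical ImageFolder dataset at "data/art/train".
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Standard ImageNet preprocessing so the pretrained weights see familiar statistics.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

train_set = datasets.ImageFolder("data/art/train", transform=preprocess)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True, num_workers=4)

# Transfer learning: load an ImageNet-pretrained backbone and replace the
# classifier head with one sized to the number of art classes.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

# One fine-tuning pass over the data; the paper's actual training schedule is not
# specified in this record.
model.train()
for images, labels in train_loader:
    images, labels = images.to(device), labels.to(device)
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```

The abstract also mentions similarity search with the trained models. One common way to realize this, again a hedged sketch rather than the authors' documented method, is to drop the classification head, treat the pooled backbone activations as embeddings, and rank gallery artworks by cosine similarity to a query image:

```python
# Similarity-search sketch reusing the fine-tuned model above: penultimate-layer
# features as embeddings, cosine similarity for ranking. This is an assumed
# retrieval scheme, not a procedure described in this record.
import torch.nn.functional as F

model.eval()
# Remove the final fully connected layer so the model outputs pooled 2048-d features.
feature_extractor = nn.Sequential(*list(model.children())[:-1])

@torch.no_grad()
def embed(batch):
    feats = feature_extractor(batch.to(device)).flatten(1)  # shape (N, 2048)
    return F.normalize(feats, dim=1)                        # unit norm -> cosine similarity via dot product

# Usage idea: for a query tensor of shape (1, 3, 224, 224) and a gallery tensor of
# shape (N, 3, 224, 224), scores = embed(query) @ embed(gallery).T ranks the gallery
# by visual similarity to the query artwork.
```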

Publication types

  • Comparative Study
  • Research Support, Non-U.S. Gov't

MeSH terms

  • Humans
  • Models, Theoretical*
  • Paintings*

Grants and funding

This work was supported in part by the Key Laboratory of E&M (Zhejiang University of Technology), Ministry of Education & Zhejiang Province (Grant No. EM 2016070101). The funders had no role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript.