Comparing the Pretrained Models of Source Code by Re-pretraining Under a Unified Setup

IEEE Trans Neural Netw Learn Syst. 2023 Sep 11:PP. doi: 10.1109/TNNLS.2023.3308595. Online ahead of print.

Abstract

Recent years have seen the successful application of large pretrained models of source code (CodePTMs) to code representation learning, which has taken the field of software engineering (SE) from task-specific solutions to task-agnostic generic models. Owing to these remarkable results, CodePTMs are seen as a promising direction in both academia and industry. While a number of CodePTMs have been proposed, they are often not directly comparable because they differ in experimental setups such as pretraining datasets, model sizes, evaluation tasks, and evaluation datasets. In this article, we first review the experimental setups used in previous work and propose a standardized setup to facilitate fair comparisons among CodePTMs and to explore the impact of their pretraining tasks. Then, under this standardized setup, we re-pretrain the CodePTMs with the model architecture, input modalities, and pretraining tasks they originally declared, and fine-tune each model on every evaluation SE task. Finally, we present the experimental results and provide a comprehensive discussion of the relative strengths and weaknesses of the different pretraining tasks with respect to each SE task. We hope our findings can inspire and advance future research on more powerful CodePTMs.