Sequence-to-sequence pretraining for a less-resourced Slovenian language

Front Artif Intell. 2023 Mar 28:6:932519. doi: 10.3389/frai.2023.932519. eCollection 2023.

Abstract

Introduction: Large pretrained language models have recently come to dominate the field of natural language processing. As an alternative to the predominant masked language modeling introduced in BERT, the T5 model introduced a more general training objective, namely sequence-to-sequence transformation, which more naturally fits text generation tasks. The monolingual variants of T5 models have been limited to well-resourced languages, while the massively multilingual mT5 model supports 101 languages.
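To make the difference between the two training objectives concrete, the following minimal, made-up illustration contrasts BERT-style masked language modeling with T5-style span-corruption denoising (the Slovene sentence and sentinel tokens are illustrative; the exact corruption scheme used for SloT5 is described in the paper, not here):

```python
# Illustrative toy example of the two pretraining objectives (not the authors' code).

# BERT-style masked language modeling: the model predicts the masked token in place.
masked_lm_input = "Ljubljana je [MASK] mesto Slovenije."

# T5-style sequence-to-sequence denoising: corrupted spans in the encoder input are
# replaced by sentinel tokens, and the decoder generates the removed spans as text.
seq2seq_input  = "Ljubljana je <extra_id_0> mesto Slovenije."
seq2seq_target = "<extra_id_0> glavno <extra_id_1>"
```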

Methods: We trained two different-sized T5-type sequence-to-sequence models for the morphologically rich Slovene language, which has far fewer resources available. We analyzed the behavior of the new models on 11 tasks: eight classification tasks (named entity recognition, sentiment classification, lemmatization, two question answering tasks, two natural language inference tasks, and a coreference resolution task) and three text generation tasks (text simplification and two summarization tasks on different datasets). We compared the new SloT5 models with the multilingual mT5 model, the multilingual mBART-50 model, and four BERT-like encoder models: multilingual BERT, multilingual XLM-RoBERTa, the trilingual Croatian-Slovene-English BERT, and the monolingual Slovene RoBERTa model (SloBERTa).
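As a rough usage sketch, the released models should be loadable through the Hugging Face transformers library like any other T5-type checkpoint; note that the model identifier and the "povzemi:" task prefix below are placeholders chosen for illustration, not the authors' published names or fine-tuning setup:

```python
# Minimal sketch: loading a T5-type Slovene model and running seq2seq generation.
# The checkpoint name is a placeholder, not necessarily the published SloT5 identifier.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "path-or-hub-id/slot5-small"  # placeholder identifier
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# After fine-tuning on a generative task such as summarization, inference is plain
# sequence-to-sequence generation ("povzemi:" is an assumed, illustrative prefix).
inputs = tokenizer("povzemi: <slovensko besedilo>", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```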

Results: On the classification tasks, the SloT5 models mostly lag behind the monolingual Slovene SloBERTa model. However, they are well suited to generative tasks, where they produce several useful results. In general, model size matters, and there is currently not enough Slovene training data to successfully pretrain large models.

Discussion: While the results were obtained on Slovene, we believe they may generalize to other less-resourced languages for which such models will be built. We make the training and evaluation code, as well as the trained models, publicly available.

Keywords: Slovene; T5 model; low-resource languages; natural language processing; pretrained language models; sequence-to-sequence models; transformers.

Grants and funding

This work was partially supported by the Slovenian Research Agency (ARRS) core research programme P6-0411 and projects J6-2581, J7-3159, and J1-2480, as well as by the Ministry of Culture of the Republic of Slovenia through the project Development of Slovene in Digital Environment (RSDO).