Deep transfer learning with multimodal embedding to tackle cold-start and sparsity issues in recommendation system

PLoS One. 2022 Aug 25;17(8):e0273486. doi: 10.1371/journal.pone.0273486. eCollection 2022.

Abstract

Recommender systems (RSs) have become increasingly vital in the modern information era and connected economy. They play a key role in business operations by generating personalized suggestions and minimizing information overload. However, the performance of traditional RSs is limited by data sparsity and cold-start issues. Although deep learning-based recommender systems (DLRSs) are very popular, they underperform on rating matrices with sparse entries, and despite their performance improvements they still suffer from data sparsity, cold-start, serendipity, and generalizability issues. We propose a multistage model that uses multimodal data embedding and deep transfer learning for effective and personalized product recommendations and is designed to overcome data sparsity and cold-start issues. The proposed model comprises two phases. In the first (offline) phase, a deep learning technique learns hidden features from a large image dataset (targeting the new-item cold start), and a multimodal data embedding produces dense user and item feature vectors (targeting the user cold start). This phase yields three similarity matrices that serve as inputs to the second (online) phase, which generates a list of the top-n relevant items for a target user. We evaluated the accuracy and effectiveness of the proposed model against existing baseline RSs using a Brazilian E-commerce dataset. Our model scored 0.5882 for MAE and 0.4011 for RMSE, both lower than the baseline RSs, indicating improved accuracy and the ability to mitigate the typical cold-start and data-sparsity issues during recommendation.
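
To make the two-phase design concrete, the sketch below shows how an online phase might combine three offline-computed similarity matrices into a top-n list for a target user. The matrix names, fusion weights, and the weighted neighborhood-style scoring are illustrative assumptions, not the paper's exact method:

    import numpy as np

    def top_n_items(sim_item_visual, sim_item_meta, sim_user,
                    ratings, user_idx, n=10, weights=(0.4, 0.3, 0.3)):
        # Hypothetical online phase: fuse the three similarity matrices
        # produced offline into a single score per item for one user.
        #   sim_item_visual: (I, I) item-item similarity from image features
        #   sim_item_meta:   (I, I) item-item similarity from the multimodal embedding
        #   sim_user:        (U, U) user-user similarity from dense user vectors
        #   ratings:         (U, I) float rating matrix, 0 marks unrated items
        w_v, w_m, w_u = weights
        user_ratings = ratings[user_idx]  # (I,) ratings by the target user

        # Item-based scores: items similar to those the user already rated.
        item_scores = user_ratings @ (w_v * sim_item_visual + w_m * sim_item_meta)

        # User-based scores: items rated highly by similar users
        # (helps when the target user has few ratings of their own).
        user_scores = sim_user[user_idx] @ ratings

        scores = item_scores + w_u * user_scores
        scores[user_ratings > 0] = -np.inf   # exclude items already rated
        return np.argsort(scores)[::-1][:n]  # indices of the top-n items

Any similarity-fusion scheme (or a learned ranking model) could replace the fixed weights here; the point of the architecture is that the expensive embedding work happens once offline, leaving the online step as cheap matrix-vector products.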

MeSH terms

  • Algorithms*
  • Brazil
  • Commerce*
  • Machine Learning

Grants and funding

The author(s) received no specific funding for this work.