Diffusion models in text generation: a survey

PeerJ Comput Sci. 2024 Feb 23;10:e1905. doi: 10.7717/peerj-cs.1905. eCollection 2024.

Abstract

Diffusion models are a class of probabilistic generative models that learn to reverse a gradual noising process; they were first applied to image generation. Recently, they have drawn wide interest in natural language generation (NLG), a sub-field of natural language processing (NLP), owing to their capability to produce diverse and high-quality text. In this article, we conduct a comprehensive survey of the application of diffusion models to text generation. We divide text generation into three categories (conditional, unconstrained, and multi-mode text generation) and introduce each in detail. In addition, given that autoregressive pre-trained language models (PLMs) currently dominate text generation, we compare diffusion models and PLMs along multiple dimensions, highlighting their respective advantages and limitations, and argue that integrating PLMs into diffusion models is a valuable research avenue. We also discuss the current challenges diffusion models face in text generation and propose potential future research directions, such as improving sampling speed to address scalability issues and exploring multi-modal text generation. By providing a comprehensive analysis and outlook, this survey is intended to serve as a reference for researchers and practitioners interested in applying diffusion models to text generation tasks.
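As background for the abstract's references to diffusion models, the following is a minimal sketch of the standard denoising diffusion probabilistic model (DDPM) formulation of Ho, Jain & Abbeel (2020); it is supplementary context added here, not text from the survey itself. The symbols follow that paper's conventional notation: $\beta_t$ is a fixed noise schedule, $\bar{\alpha}_t = \prod_{s=1}^{t}(1-\beta_s)$, and $\epsilon_\theta$ is the learned denoising network.

% Forward process: Gaussian corruption of the previous latent.
\[
q(x_t \mid x_{t-1}) = \mathcal{N}\!\left(x_t;\ \sqrt{1-\beta_t}\,x_{t-1},\ \beta_t \mathbf{I}\right)
\]
% Closed-form corruption: any step x_t is reachable directly from the data x_0.
\[
x_t = \sqrt{\bar{\alpha}_t}\,x_0 + \sqrt{1-\bar{\alpha}_t}\,\epsilon,
\qquad \epsilon \sim \mathcal{N}(0, \mathbf{I})
\]
% Reverse process: the model learns to denoise one step at a time.
\[
p_\theta(x_{t-1} \mid x_t) = \mathcal{N}\!\left(x_{t-1};\ \mu_\theta(x_t, t),\ \Sigma_\theta(x_t, t)\right)
\]
% Simplified training objective: predict the injected noise.
\[
L_{\text{simple}} = \mathbb{E}_{t,\,x_0,\,\epsilon}\left[\ \lVert \epsilon - \epsilon_\theta(x_t, t) \rVert^{2}\ \right]
\]

For text, where tokens are discrete, this machinery is typically adapted either by diffusing in a continuous token-embedding space or by defining discrete corruption kernels over the vocabulary; the survey's taxonomy of conditional, unconstrained, and multi-mode generation concerns how such models are conditioned, not the diffusion process itself.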

Keywords: Diffusion models; Natural language generation; Text generation.

Grants and funding

This work was supported by the National Natural Science Foundation of China (Nos. 62072409 and 62176234). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.