Deep reinforcement learning-based approach for rumor influence minimization in social networks

Appl Intell (Dordr). 2023 Apr 4:1-18. doi: 10.1007/s10489-023-04555-y. Online ahead of print.

Abstract

Spreading malicious rumors on social networks such as Facebook, Twitter, and WeChat can trigger political conflicts, sway public opinion, and cause social disruption. A rumor can spread rapidly across a network and is difficult to control once it has gained traction. Rumor influence minimization (RIM) is a central problem in information diffusion and network theory that involves finding ways to minimize rumor spread within a social network. Existing research on the RIM problem has focused on blocking the actions of influential users who can drive rumor propagation. These traditional static solutions do not adequately capture the dynamics and characteristics of rumor evolution from a global perspective. A deep reinforcement learning strategy that takes a wide range of factors into account may therefore be an effective way of addressing the RIM challenge. This study introduces the dynamic rumor influence minimization (DRIM) problem, a step-by-step discrete-time optimization method for controlling rumors. In addition, we provide a dynamic rumor-blocking approach, namely RLDB, based on deep reinforcement learning. First, a static rumor propagation model (SRPM) and a dynamic rumor propagation model (DRPM), both based on the independent cascade model, are presented. The primary benefit of the DRPM is that it can dynamically adjust the probability matrix according to the number of individuals affected by rumors in a social network, thereby improving the accuracy of rumor propagation simulation. Second, the RLDB strategy identifies the users to block in order to minimize rumor influence by observing the dynamics of user states and the social network structure. Finally, we assess the blocking model using four real-world datasets of different sizes. The experimental results demonstrate the superiority of the proposed approach over heuristics such as out-degree (OD), betweenness centrality (BC), and PageRank (PR).

Keywords: Deep Q-network; Deep reinforcement learning; Online social networks; Rumor influence minimization.
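Illustrative sketch (not from the paper): the abstract describes an independent-cascade style propagation process whose activation probabilities are adjusted as the rumor spreads, plus a strategy that selects users to block so as to minimize the rumor's reach. The minimal Python sketch below mimics that idea under simplifying assumptions. The toy graph, the scaling rule used for the "dynamic" probabilities, and the greedy expected-spread blocker are hypothetical stand-ins, not the paper's SRPM/DRPM models or the learned RLDB policy.

import random

# Hypothetical toy graph: adjacency lists with base activation probabilities.
# Node labels and probabilities are illustrative, not taken from the paper.
graph = {
    0: {1: 0.3, 2: 0.2},
    1: {3: 0.4},
    2: {3: 0.25, 4: 0.3},
    3: {4: 0.2},
    4: {},
}

def simulate_cascade(graph, seeds, blocked, dynamic=True):
    """One Monte Carlo run of an independent-cascade rumor spread.

    If `dynamic` is True, each step rescales the base activation
    probabilities by the current fraction of infected nodes -- a rough,
    assumed stand-in for the dynamic probability matrix described in
    the abstract.
    """
    infected = set(seeds) - blocked
    frontier = list(infected)
    n = len(graph)
    while frontier:
        scale = 1.0 + len(infected) / n if dynamic else 1.0
        next_frontier = []
        for u in frontier:
            for v, p in graph[u].items():
                if v in infected or v in blocked:
                    continue
                if random.random() < min(1.0, p * scale):
                    infected.add(v)
                    next_frontier.append(v)
        frontier = next_frontier
    return len(infected)

def greedy_block(graph, seeds, budget, runs=200):
    """Pick `budget` nodes to block by estimated reduction in expected
    rumor spread (a simple heuristic stand-in for the learned RLDB policy)."""
    blocked = set()
    candidates = set(graph) - set(seeds)
    for _ in range(budget):
        def expected_spread(extra):
            b = blocked | {extra}
            return sum(simulate_cascade(graph, seeds, b) for _ in range(runs)) / runs
        best = min(candidates - blocked, key=expected_spread)
        blocked.add(best)
    return blocked

if __name__ == "__main__":
    random.seed(0)
    seeds = [0]
    blocked = greedy_block(graph, seeds, budget=1)
    spread = sum(simulate_cascade(graph, seeds, blocked) for _ in range(500)) / 500
    print(f"blocked nodes: {blocked}, expected spread: {spread:.2f}")

A learned approach such as the RLDB strategy would replace the greedy expected-spread criterion with a deep Q-network that maps the observed infection state and network structure to a blocking action at each discrete time step.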