Finding key players in complex networks through deep reinforcement learning

Nat Mach Intell. 2020 Jun;2(6):317-324. doi: 10.1038/s42256-020-0177-2. Epub 2020 May 25.

Abstract

Finding an optimal set of nodes, called key players, whose activation (or removal) would maximally enhance (or degrade) certain network functionality is a fundamental class of problems in network science [1,2]. Potential applications include network immunization [3], epidemic control [4], drug design [5], and viral marketing [6]. Due to their general NP-hard nature, these problems typically cannot be solved by exact algorithms with polynomial time complexity [7]. Many approximate and heuristic strategies have been proposed to deal with specific application scenarios [1,2,8-12]. Yet, we still lack a unified framework to efficiently solve this class of problems. Here we introduce a deep reinforcement learning framework, FINDER, which can be trained purely on small synthetic networks generated by toy models and then applied to a wide spectrum of influencer-finding problems. Extensive experiments under various problem settings demonstrate that FINDER significantly outperforms existing methods in terms of solution quality. Moreover, it is several orders of magnitude faster than existing methods on large networks. The presented framework opens up a new direction of using deep learning techniques to understand the organizing principles of complex networks, which enables us to design networks that are more robust against both attacks and failures.
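To make the problem concrete, the "removal" variant asks for the k nodes whose deletion most shrinks the giant connected component (GCC). The sketch below is a minimal greedy baseline for this objective on a toy graph, not the FINDER method described in the paper; the graph, function names, and the GCC objective are illustrative assumptions.

```python
from collections import defaultdict, deque

def gcc_size(adj, removed):
    """Size of the largest connected component, ignoring removed nodes."""
    seen = set(removed)
    best = 0
    for start in adj:
        if start in seen:
            continue
        # BFS over the component containing `start`.
        queue = deque([start])
        seen.add(start)
        size = 0
        while queue:
            u = queue.popleft()
            size += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        best = max(best, size)
    return best

def greedy_dismantle(adj, k):
    """Remove k nodes, each time picking the one whose removal most shrinks the GCC."""
    removed = set()
    order = []
    for _ in range(k):
        best_node = min(
            (u for u in adj if u not in removed),
            key=lambda u: gcc_size(adj, removed | {u}),
        )
        removed.add(best_node)
        order.append(best_node)
    return order

# Toy network: two triangles joined through a single bridge node, 3.
edges = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 5), (5, 6), (6, 4)]
adj = defaultdict(set)
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

order = greedy_dismantle(adj, 1)
# The greedy baseline finds the bridge node 3, whose removal
# splits the 7-node network into two triangles (GCC drops from 7 to 3).
```

Such greedy and heuristic baselines scale poorly because each step re-evaluates connectivity for every candidate node; the paper's contribution is learning this node-selection policy with deep reinforcement learning instead.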