Semisupervised Network Embedding With Differentiable Deep Quantization

IEEE Trans Neural Netw Learn Syst. 2023 Aug;34(8):4791-4802. doi: 10.1109/TNNLS.2021.3129280. Epub 2023 Aug 4.

Abstract

Learning accurate low-dimensional embeddings for a network is a crucial task, as it facilitates many downstream network analytics tasks. For large networks, the trained embeddings often require a significant amount of space to store, making storage and processing a challenge. Building on our previous work on semisupervised network embedding, we develop d-SNEQ, a differentiable DNN-based quantization method for network embedding. d-SNEQ incorporates a rank loss to equip the learned quantization codes with rich high-order information, and it substantially compresses the trained embeddings, reducing the storage footprint and accelerating retrieval. We also propose a new evaluation metric, path prediction, to evaluate model performance on the preservation of high-order information more fairly and directly. Our evaluation on four real-world networks of diverse characteristics shows that d-SNEQ outperforms a number of state-of-the-art embedding methods in link prediction, path prediction, node classification, and node recommendation while being far more space- and time-efficient.
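
To make the storage argument concrete, the sketch below shows how quantization compresses node embeddings in general. It uses product quantization with codebooks sampled from the data rather than learned by a DNN, so it is only an illustration of the compression mechanism, not d-SNEQ itself; all sizes and names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy node embeddings: 1000 nodes, 128-dim float32 (illustrative, not d-SNEQ's data).
n, d = 1000, 128
emb = rng.standard_normal((n, d)).astype(np.float32)

# Product quantization: split each vector into M subvectors and encode each
# with the index of its nearest centroid in a small per-subspace codebook.
M, K = 8, 256            # 8 subspaces, 256 centroids each -> 1 byte per subspace
sub_d = d // M

codebooks = []
codes = np.empty((n, M), dtype=np.uint8)
for m in range(M):
    block = emb[:, m * sub_d:(m + 1) * sub_d]
    # Illustrative codebook: K vectors sampled from the data (real PQ runs k-means;
    # d-SNEQ instead learns codes end-to-end with a differentiable DNN).
    cb = block[rng.choice(n, K, replace=False)]
    codebooks.append(cb)
    # Assign each subvector to its nearest centroid (squared Euclidean distance).
    dists = ((block[:, None, :] - cb[None, :, :]) ** 2).sum(-1)
    codes[:, m] = dists.argmin(1)

# Storage: 128 * 4 = 512 bytes/node as floats vs. M = 8 bytes/node as codes.
ratio = emb.nbytes / codes.nbytes
print(f"compression ratio: {ratio:.0f}x")  # -> 64x (codebooks add a small constant overhead)
```

Retrieval over such codes can use precomputed centroid-distance tables, which is one source of the speedups that quantized embeddings enable.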