CNN and Attention-Based Joint Source Channel Coding for Semantic Communications in WSNs

Sensors (Basel). 2024 Feb 1;24(3):957. doi: 10.3390/s24030957.

Abstract

Wireless Sensor Networks (WSNs) have emerged as an efficient solution for numerous real-time applications, attributable to their compactness, cost-effectiveness, and ease of deployment. The rapid advancement of 5G technology and mobile edge computing (MEC) in recent years has catalyzed the transition towards large-scale deployment of WSN devices. However, the resulting data proliferation and the dynamics of communication environments introduce new challenges for WSN communication: (1) ensuring robust communication in adverse environments and (2) effectively alleviating the bandwidth pressure of massive data transmission. In response to these challenges, this paper proposes a semantic communication solution. Specifically, considering the limited computational and storage resources of WSN devices, we propose a flexible Attention-based Adaptive Coding (AAC) module. This module integrates window and channel attention mechanisms, dynamically adjusts semantic information in response to the current channel state, and allows a single model to adapt across various Signal-to-Noise Ratio (SNR) environments. Furthermore, to validate the effectiveness of this approach, the paper introduces an end-to-end Joint Source Channel Coding (JSCC) scheme for image semantic communication that employs the AAC module. Experimental results demonstrate that the proposed scheme surpasses existing deep JSCC schemes across datasets of varying resolutions and validate the efficacy of the proposed AAC module, which dynamically adjusts critical information according to the current channel state. This enables a single model to be trained over a range of SNRs and to achieve better results.
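To make the idea of SNR-conditioned channel attention concrete, the following is a minimal NumPy sketch, not the paper's actual AAC implementation: the function name, the weight shapes, and the choice of appending the SNR (in dB) to globally pooled features before a small gating MLP are all illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def snr_channel_attention(features, snr_db, w1, b1, w2, b2):
    """Illustrative sketch: scale each feature channel by a gate in (0, 1)
    computed from the channel-wise averages and the current SNR in dB.

    features: (C, H, W) feature map
    returns:  gated feature map of the same shape
    """
    pooled = features.mean(axis=(1, 2))          # (C,) global average pool
    ctx = np.concatenate([pooled, [snr_db]])     # append channel-state info (assumption)
    hidden = np.maximum(0.0, ctx @ w1 + b1)      # ReLU bottleneck
    gates = sigmoid(hidden @ w2 + b2)            # (C,) per-channel gates
    return features * gates[:, None, None]       # reweight channels

# Toy usage with randomly initialized weights (illustrative only)
rng = np.random.default_rng(0)
C, H, W, hid = 8, 4, 4, 4
x = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C + 1, hid)) * 0.1
b1 = np.zeros(hid)
w2 = rng.standard_normal((hid, C)) * 0.1
b2 = np.zeros(C)
y_low = snr_channel_attention(x, snr_db=0.0, w1=w1, b1=b1, w2=w2, b2=b2)
y_high = snr_channel_attention(x, snr_db=20.0, w1=w1, b1=b1, w2=w2, b2=b2)
```

Because the gates depend on the SNR input, the same set of weights produces different channel weightings at different channel states, which is the property that lets one model serve a range of SNRs.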

Keywords: attention mechanism; deep neural network; joint source channel coding; mobile edge computing; semantic communications; wireless sensor networks.

Grants and funding

This research was funded by the National Natural Science Foundation of China under Grant 62201101; the China Postdoctoral Science Foundation under Grant 2022M720020; the Natural Science Foundation of Chongqing, China under Grant cstc2021jcyj-msxmX0458; and the Chongqing Technology Innovation and Application Development Special Key Project under Grant CSTB2022TIAD-KPX0059.