Federated Deep Reinforcement Learning Based Task Offloading with Power Control in Vehicular Edge Computing

Sensors (Basel). 2022 Dec 7;22(24):9595. doi: 10.3390/s22249595.

Abstract

Vehicular edge computing (VEC) is a promising technology for supporting computation-intensive vehicular applications with low latency at the network edge. Vehicles offload their tasks to VEC servers (VECSs) to improve the quality of service (QoS) of these applications. However, the high density of vehicles and VECSs, together with vehicle mobility, increases channel interference and degrades channel conditions, resulting in higher power consumption and latency. Therefore, we propose a task offloading method with power control that accounts for dynamic channel interference and channel conditions in a vehicular environment. The objective is to maximize the throughput of a VEC system under the power constraints of the vehicles. We leverage deep reinforcement learning (DRL) to achieve strong performance in complex environments with high-dimensional inputs. However, most conventional methods adopt a multi-agent DRL approach in which each agent makes decisions using only local information, which can result in poor performance, whereas single-agent DRL approaches require excessive data exchange because all data must be concentrated in a single agent. To address these challenges, we adopt a federated deep reinforcement learning method that combines the centralized and distributed approaches within the deep deterministic policy gradient (DDPG) framework. Experimental results demonstrate the effectiveness of the proposed method in terms of the throughput and queueing delay of vehicles in dynamic vehicular networks.
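To illustrate the core idea of combining per-vehicle DDPG agents with centralized aggregation, the sketch below shows a federated-averaging (FedAvg-style) step over the local actors' weights. It is a minimal sketch, assuming PyTorch; the state/action dimensions, network architecture, number of agents, and aggregation schedule are illustrative placeholders, not the paper's exact configuration (the critics and the power-control action semantics would be handled analogously).

```python
# Minimal sketch: federated averaging of per-vehicle DDPG actor weights.
# Assumptions (not from the paper): PyTorch, toy dimensions, FedAvg aggregation.
import copy
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 8, 2   # assumed: e.g., channel/queue state -> (offload ratio, Tx power)
NUM_VEHICLES = 4               # assumed number of local vehicle agents

class Actor(nn.Module):
    """Deterministic policy network mu(s) for DDPG."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, ACTION_DIM), nn.Sigmoid(),  # bounded actions, e.g., power in [0, P_max]
        )

    def forward(self, state):
        return self.net(state)

def federated_average(models):
    """FedAvg: element-wise mean of the local models' parameters."""
    global_state = copy.deepcopy(models[0].state_dict())
    for key in global_state:
        global_state[key] = torch.stack(
            [m.state_dict()[key].float() for m in models]
        ).mean(dim=0)
    return global_state

# Each vehicle trains its own actor on local observations (training loop omitted);
# the server then aggregates the weights and broadcasts the global model back.
local_actors = [Actor() for _ in range(NUM_VEHICLES)]
global_weights = federated_average(local_actors)
for actor in local_actors:
    actor.load_state_dict(global_weights)
```

This captures the trade-off described above: agents act on local observations (avoiding the data concentration of a single-agent design), while periodic weight aggregation shares knowledge across vehicles (avoiding the purely local view of conventional multi-agent DRL).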

Keywords: deep deterministic policy gradient; federated deep reinforcement learning; power control; task offloading; vehicular edge computing.

Grants and funding

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2021R1F1A1047113).