Deep Reinforcement Learning for Edge Caching with Mobility Prediction in Vehicular Networks

Sensors (Basel). 2023 Feb 3;23(3):1732. doi: 10.3390/s23031732.

Abstract

As vehicles become connected to the Internet, a variety of services can be provided to users. However, if vehicle users' requests are concentrated on a remote server, the transmission delay increases and the delay constraint is likely to be violated. To solve this problem, content can be cached closer to users, which distributes requests and reduces latency. Road side units (RSUs) and vehicles can serve as caching nodes by providing storage space near users through a mobile edge computing (MEC) server and an on-board unit (OBU), respectively. In this paper, we propose a caching strategy for both RSUs and vehicles with the goal of maximizing caching node throughput. Because vehicles move at high speed, predicting their positions in advance helps determine where and what content should be cached. Exploiting the temporal and spatial characteristics of vehicle mobility, we adopt a long short-term memory (LSTM) network to predict vehicle locations. To respond to time-varying content popularity, a deep deterministic policy gradient (DDPG) agent determines the size of each piece of content to be stored at the caching nodes. Experiments in various environments show that the proposed algorithm outperforms other caching methods in terms of caching node throughput, delay constraint satisfaction, and update cost.
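The abstract does not specify how the DDPG agent's continuous output is turned into per-content cache sizes. As an illustrative sketch only (the function name, the softmax mapping, and the numbers are assumptions, not the paper's method), a continuous action vector with one raw score per content item can be projected onto non-negative allocations that exactly fill a caching node's storage budget:

```python
import math

def allocate_cache(action, capacity):
    """Map a continuous action vector (one raw score per content item,
    e.g. the output of a DDPG actor network) to cache sizes that are
    non-negative and sum to `capacity`, via a numerically stabilized softmax."""
    m = max(action)
    exps = [math.exp(a - m) for a in action]  # subtract max to avoid overflow
    total = sum(exps)
    return [capacity * e / total for e in exps]

# Hypothetical example: three content items sharing a 100 MB cache budget.
# Higher scores (e.g. for more popular content) receive larger allocations.
sizes = allocate_cache([2.0, 1.0, 0.5], capacity=100.0)
```

A softmax-style projection is one common way to keep a continuous caching action feasible; the paper itself may use a different constraint-handling scheme.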

Keywords: deep reinforcement learning; edge caching; long short-term memory; vehicular network.

Grants and funding

This research was supported by the Step 4 BK21 plus program and by the research project (No. 2021R1F1A1047113) through the National Research Foundation of Korea (NRF), funded by the Ministry of Education and the Korean government (MSIT).