Deep Reinforcement Learning for Edge Service Placement in Softwarized Industrial Cyber-Physical System

IEEE Trans Industr Inform. 2021 Aug;17(8). doi: 10.1109/tii.2020.3041713.

Abstract

Future industrial cyber-physical system (CPS) devices are expected to request a large number of delay-sensitive services that need to be processed at the edge of the network. Because edge resources are limited, service placement in the edge cloud has attracted significant attention. Although many placement schemes have been designed, the service placement problem in industrial CPS has not been well studied. Furthermore, none of the existing schemes jointly optimizes service placement, workload scheduling, and resource allocation under uncertain service demands. To address these issues, we first formulate a joint optimization problem of service placement, workload scheduling, and resource allocation that minimizes the service response delay. We then propose an improved deep Q-network (DQN)-based service placement algorithm. The proposed algorithm obtains the optimal resource allocation via convex optimization, while the service placement and workload scheduling decisions are made with the aid of the DQN. The experimental results verify that, compared with existing algorithms, the proposed algorithm can reduce the average service response time by 8-10%.

Keywords: Deep reinforcement learning; edge cloud; industrial cyber-physical system (CPS); service placement.
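The following is a minimal, hypothetical sketch (not the authors' implementation) of the hybrid idea summarized in the abstract: a DQN chooses the discrete service-placement action, and the resource allocation for the chosen placement is then recovered from a convex subproblem. Here the inner problem is assumed to be minimizing the total processing delay sum_i w_i/c_i subject to a node capacity constraint sum_i c_i = C, whose optimum has the closed form c_i ∝ sqrt(w_i); the environment, state encoding, reward, and problem sizes (N_NODES, N_SERVICES) are illustrative assumptions rather than the paper's actual system model.

```python
import random
from collections import deque

import numpy as np
import torch
import torch.nn as nn

# Illustrative problem sizes (assumptions, not taken from the paper).
N_NODES, N_SERVICES = 4, 6
STATE_DIM = N_NODES + N_SERVICES      # node capacities + service workloads
GAMMA, EPS, LR = 0.95, 0.1, 1e-3


def optimal_allocation(workloads: np.ndarray, capacity: float) -> np.ndarray:
    """Convex inner problem: minimize sum_i w_i/c_i s.t. sum_i c_i = capacity.
    The Lagrangian gives the closed form c_i = capacity * sqrt(w_i) / sum_j sqrt(w_j)."""
    s = np.sqrt(workloads)
    return capacity * s / s.sum()


def response_delay(workloads: np.ndarray, capacity: float) -> float:
    """Total processing delay on one node under the optimal resource split."""
    c = optimal_allocation(workloads, capacity)
    return float(np.sum(workloads / c))


class QNet(nn.Module):
    """Small MLP mapping the system state to Q-values over placement actions."""
    def __init__(self, state_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, x):
        return self.net(x)


def select_action(qnet: QNet, state: np.ndarray, n_actions: int) -> int:
    """Epsilon-greedy choice of the edge node that hosts the incoming services."""
    if random.random() < EPS:
        return random.randrange(n_actions)
    with torch.no_grad():
        q = qnet(torch.as_tensor(state, dtype=torch.float32))
    return int(torch.argmax(q).item())


def train_step(qnet: QNet, target: QNet, opt, batch):
    """One DQN update with a frozen target network for the TD target."""
    s, a, r, s2 = map(np.array, zip(*batch))
    s = torch.as_tensor(s, dtype=torch.float32)
    s2 = torch.as_tensor(s2, dtype=torch.float32)
    a = torch.as_tensor(a, dtype=torch.int64).unsqueeze(1)
    r = torch.as_tensor(r, dtype=torch.float32)
    q = qnet(s).gather(1, a).squeeze(1)
    with torch.no_grad():
        y = r + GAMMA * target(s2).max(dim=1).values
    loss = nn.functional.mse_loss(q, y)
    opt.zero_grad()
    loss.backward()
    opt.step()


if __name__ == "__main__":
    qnet, target = QNet(STATE_DIM, N_NODES), QNet(STATE_DIM, N_NODES)
    target.load_state_dict(qnet.state_dict())
    opt = torch.optim.Adam(qnet.parameters(), lr=LR)
    buffer = deque(maxlen=10_000)

    capacities = np.full(N_NODES, 10.0)   # assumed identical node capacities
    state = np.concatenate([capacities, np.random.uniform(1.0, 3.0, N_SERVICES)])
    for step in range(200):
        node = select_action(qnet, state, N_NODES)
        workloads = state[N_NODES:]
        # Reward: negative delay of the chosen placement, with the convex-optimal
        # resource split computed inside the environment step.
        reward = -response_delay(workloads, capacities[node])
        next_state = np.concatenate([capacities, np.random.uniform(1.0, 3.0, N_SERVICES)])
        buffer.append((state, node, reward, next_state))
        if len(buffer) >= 32:
            train_step(qnet, target, opt, random.sample(list(buffer), 32))
        if step % 20 == 0:
            target.load_state_dict(qnet.state_dict())
        state = next_state
```

In the paper's setting, the workload-scheduling decision would also be part of the action space and the delay model would account for transmission as well as computation; the sketch only illustrates how a learned Q-network and a convex inner solver can be combined in one decision loop.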