Intelligent Task Dispatching and Scheduling Using a Deep Q-Network in a Cluster Edge Computing System

Sensors (Basel). 2022 May 28;22(11):4098. doi: 10.3390/s22114098.

Abstract

Recently, intelligent IoT applications based on artificial intelligence (AI) have been deployed with mobile edge computing (MEC). Intelligent IoT applications demand more computing resources and lower service latencies for AI tasks in dynamic MEC environments. Thus, in this paper, considering the resource scalability and resource optimization of edge computing, an intelligent task dispatching model using a deep Q-network, which can efficiently use the computing resources of edge nodes, is proposed to maximize the computation ability of a cluster edge system consisting of multiple edge nodes. The cluster edge system can be implemented with Kubernetes technology. The objective of the proposed model is to minimize the average response time of tasks offloaded to the edge computing system and to optimize the resource allocation for computing the offloaded tasks. To this end, we first formulate the resource allocation problem as a Markov decision process (MDP) and adopt deep reinforcement learning to solve it. The proposed intelligent task dispatching model is therefore designed based on a deep Q-network (DQN) algorithm that updates the task dispatching policy. The simulation results show that the proposed model achieves better convergence performance, in terms of the average completion time of all offloaded tasks, than existing task dispatching methods such as the Random, Least-Load, and Round-Robin methods, and attains a higher task completion rate than these existing methods when using the same resources in the cluster edge system.

Keywords: clustering; deep reinforcement learning; edge computing; task offloading.
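
As a rough illustration of the dispatching idea summarized in the abstract, the following Python/PyTorch sketch trains a DQN agent to choose an edge node for each arriving task in a toy simulated cluster. The node count, the state features (per-node queued work plus the incoming task size), the reward (negative task completion time), and the omission of a separate target network are all simplifying assumptions made for this sketch; they are not the paper's settings or implementation.

```python
# Minimal DQN task-dispatching sketch (illustrative only; node count, state
# features, and reward shaping below are assumptions, not the paper's design).
import random
from collections import deque

import torch
import torch.nn as nn
import torch.optim as optim

N_NODES = 4              # assumed number of edge nodes in the cluster
STATE_DIM = N_NODES + 1  # per-node queued work + size of the incoming task
GAMMA, EPS, BATCH = 0.99, 0.1, 32

class QNet(nn.Module):
    """Maps the cluster state to one Q-value per candidate edge node."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, N_NODES))

    def forward(self, x):
        return self.net(x)

qnet = QNet()
opt = optim.Adam(qnet.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)

def select_node(state):
    """Epsilon-greedy dispatch: pick the edge node with the highest Q-value."""
    if random.random() < EPS:
        return random.randrange(N_NODES)
    with torch.no_grad():
        return int(qnet(torch.tensor(state).float()).argmax())

def train_step():
    """One DQN update from a random mini-batch of past dispatch decisions.
    (A separate target network, as in standard DQN, is omitted for brevity.)"""
    if len(replay) < BATCH:
        return
    batch = random.sample(replay, BATCH)
    s, a, r, s2 = (torch.tensor(x).float() for x in zip(*batch))
    a = a.long()
    q = qnet(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = r + GAMMA * qnet(s2).max(1).values
    loss = nn.functional.mse_loss(q, target)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Toy simulation loop: queued work per node drains each step; the reward is the
# negative completion time of the dispatched task (shorter completion = better).
queues = [0.0] * N_NODES
for step in range(1000):
    task = random.uniform(0.5, 2.0)                # assumed task size (work units)
    state = queues + [task]
    node = select_node(state)
    completion = queues[node] + task               # queueing delay + own run time
    queues[node] += task
    queues = [max(0.0, q - 1.0) for q in queues]   # each node processes 1 unit/step
    next_state = queues + [task]
    replay.append((state, node, -completion, next_state))
    train_step()
```

Under this toy reward, the learned policy tends toward dispatching each task to the least-loaded node, which is consistent with the paper's goal of minimizing the average completion time of offloaded tasks; the actual model in the paper operates on the cluster edge system's real resource state rather than this simplified queue model.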