Speeding Task Allocation Search for Reconfigurations in Adaptive Distributed Embedded Systems Using Deep Reinforcement Learning

Sensors (Basel). 2023 Jan 3;23(1):548. doi: 10.3390/s23010548.

Abstract

A Critical Adaptive Distributed Embedded System (CADES) is a group of interconnected nodes that must carry out a set of tasks to achieve a common goal, while fulfilling several requirements associated with their critical (e.g., hard real-time requirements) and adaptive nature. In these systems, a key challenge is to solve, in a timely manner, the combinatorial optimization problem of finding the best way to allocate the tasks to the available nodes (i.e., the task allocation), taking into account aspects such as the computational costs of the tasks and the computational capacity of the nodes. This problem is not trivial, and no polynomial-time algorithm is known that finds the optimal solution. Several studies have proposed Deep Reinforcement Learning (DRL) approaches to solve combinatorial optimization problems, and in this work we explore the application of such approaches to the task allocation problem in CADESs. We first discuss the potential advantages of using a DRL-based approach over several heuristic-based approaches to allocate tasks in CADESs, and we then demonstrate how a DRL-based approach can achieve results similar to those of the best-performing heuristic in terms of optimality of the allocation, while requiring less time to generate the allocation.
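To make the allocation problem concrete, the following is a minimal sketch of one possible heuristic baseline of the kind the abstract alludes to. The first-fit-decreasing strategy, the task costs, and the node capacities below are illustrative assumptions, not the paper's actual algorithm or data.

```python
# Illustrative sketch (assumed, not from the paper): allocate tasks, each with
# a computational cost, to nodes, each with a computational capacity, using a
# first-fit-decreasing heuristic.

def first_fit_decreasing(task_costs, node_capacities):
    """Greedily place each task (largest cost first) on the first node
    with enough remaining capacity.

    Returns a dict mapping task index -> node index, or None if some
    task cannot be placed within the remaining capacities.
    """
    remaining = list(node_capacities)
    allocation = {}
    # Consider tasks in order of decreasing computational cost.
    for t in sorted(range(len(task_costs)), key=lambda i: -task_costs[i]):
        for n, cap in enumerate(remaining):
            if task_costs[t] <= cap:
                remaining[n] -= task_costs[t]
                allocation[t] = n
                break
        else:
            return None  # no node can host this task
    return allocation

# Example: four tasks allocated to two nodes.
alloc = first_fit_decreasing([4, 3, 2, 2], [6, 5])
```

A DRL-based approach would instead learn a policy that proposes such allocations directly, trading the per-instance search time of a heuristic for a fast forward pass through a trained network.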

Keywords: Deep Reinforcement Learning; Distributed Embedded Systems; Machine Learning; combinatorial optimization.

Grants and funding

This work was supported by grant TEC2015-70313-R (Spanish Ministerio de Economía y Competitividad) and FEDER funding, and by grant PID2021-124348OB-I00 funded by MCIN/AEI/10.13039/501100011033/ERDF, EU.