A task-level emergency experience reuse method for freeway accidents onsite disposal with policy distilled reinforcement learning

Accid Anal Prev. 2023 Sep:190:107179. doi: 10.1016/j.aap.2023.107179. Epub 2023 Jun 27.

Abstract

A large number of freeway accident disposals are well recorded by accident reports and surveillance videos, but it is not easy to reuse the emergency experience contained in those past records. To reuse emergency experience for better emergency decision-making, this paper proposes a knowledge-based experience transfer method that transfers task-level freeway accident disposal experience via a multi-agent reinforcement learning algorithm with policy distillation. First, a Markov decision process is used to model the emergency decision-making process of multi-type freeway accident scenes at the task level. Then, an adaptive knowledge transfer method named policy distilled multi-agent deep deterministic policy gradient (PD-MADDPG) is proposed to reuse experience from past freeway accident records in current accidents for fast decision-making and optimal onsite disposal. The performance of the proposed algorithm is evaluated on instantiated cases of accidents that occurred on freeways in Shaanxi Province, China. Besides achieving better emergency decision performance than various typical decision-making methods, the results show that decision makers with transferred knowledge obtain 65.22%, 11.37%, 9.23%, 7.76% and 1.71% higher average reward than those without in the five studied cases, respectively. This indicates that the emergency experience transferred from past accidents contributes to fast emergency decision-making and optimal accident onsite disposal.
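The policy distillation step described above can be illustrated with a minimal sketch. This is not the paper's PD-MADDPG implementation; it only shows the core idea of distillation: a student policy is trained to match a teacher policy's action distribution over observed states by minimizing the KL divergence. The linear teacher and student policies, the state dimensions, and the gradient-descent settings are all illustrative assumptions.

```python
import numpy as np

def softmax(z):
    """Row-wise softmax over action logits."""
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
n_states, n_features, n_actions = 200, 8, 4

# Hypothetical "teacher": a fixed linear policy standing in for an agent
# trained on past accident records (in the paper, a MADDPG agent).
states = rng.normal(size=(n_states, n_features))
W_teacher = rng.normal(size=(n_features, n_actions))
teacher_probs = softmax(states @ W_teacher)

# Student policy distillation: minimize mean KL(teacher || student)
# by gradient descent on the student's linear policy weights.
W_student = np.zeros((n_features, n_actions))
lr = 0.5
for _ in range(500):
    student_probs = softmax(states @ W_student)
    # For a softmax policy, the gradient of the KL loss w.r.t. the
    # student logits is (student_probs - teacher_probs).
    grad = states.T @ (student_probs - teacher_probs) / n_states
    W_student -= lr * grad

# Remaining divergence between teacher and distilled student policies.
final_kl = np.mean(np.sum(
    teacher_probs * (np.log(teacher_probs)
                     - np.log(softmax(states @ W_student))), axis=1))
```

After training, `final_kl` is close to zero, i.e. the student reproduces the teacher's decisions on the recorded states; in the transfer setting, such a distilled policy initializes decision-making for a new accident scene.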

Keywords: Emergency decision making; Markov decision process; Multi-agent reinforcement learning; Traffic accidents; Transfer learning.

MeSH terms

  • Accidents, Traffic*
  • Algorithms*
  • China
  • Humans