Safe, Efficient, and Comfortable Autonomous Driving Based on Cooperative Vehicle Infrastructure System

Int J Environ Res Public Health. 2023 Jan 3;20(1):893. doi: 10.3390/ijerph20010893.

Abstract

Traffic crashes, heavy congestion, and discomfort often occur on rough pavements due to human drivers' imperfect decision-making for vehicle control. Autonomous vehicles (AVs) are expected to replace human drivers on urban roads and improve driving performance in the near future. With the development of the cooperative vehicle infrastructure system (CVIS), multi-source road and traffic information can be collected by onboard or roadside sensors, integrated in the cloud, and updated in real time for decision-making. This study proposes an intelligent speed control approach for AVs in the CVIS that uses deep reinforcement learning (DRL) to improve safety, efficiency, and ride comfort. First, the irregular and fluctuating road profiles of rough pavements are represented by maximum comfortable speeds on segments via vertical comfort evaluation. A DRL-based speed control model is then designed to learn safe, efficient, and comfortable car-following behavior from road and traffic information. Specifically, the model is trained and tested in a stochastic environment using data sampled from 1341 car-following events collected in California and 110 rough pavements detected in Shanghai. The experimental results show that the DRL-based speed control model improves computational efficiency, driving efficiency, longitudinal comfort, and vertical comfort by 93.47%, 26.99%, 58.33%, and 6.05%, respectively, compared with a model predictive control-based adaptive cruise control system. These results indicate that the proposed intelligent speed control approach for AVs is effective on rough pavements and has excellent potential for practical application.
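
The abstract does not specify the paper's actual state, action, or reward design, network architecture, or training hyperparameters, so the following is only a minimal illustrative sketch of DRL-based car-following speed control under a per-segment comfortable speed limit. The environment dynamics, the state vector (gap, ego speed, leader speed, maximum comfortable speed), the discrete acceleration actions, and all reward weights below are assumptions, not the authors' method.

# Illustrative sketch only: every name and constant below is an assumption.
import random
import numpy as np
import torch
import torch.nn as nn

class CarFollowingEnv:
    """Toy car-following environment on a rough-pavement segment.

    State: [gap to leader (m), ego speed (m/s), leader speed (m/s),
            segment's maximum comfortable speed (m/s)].
    Action: discrete acceleration in {-2, -1, 0, 1, 2} m/s^2.
    """
    ACTIONS = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
    DT = 0.5  # control step (s), assumed

    def reset(self):
        self.gap = random.uniform(20.0, 40.0)
        self.v_ego = random.uniform(5.0, 15.0)
        self.v_lead = random.uniform(5.0, 15.0)
        self.v_comfort = random.uniform(8.0, 20.0)  # from vertical comfort evaluation
        return self._obs()

    def _obs(self):
        return np.array([self.gap, self.v_ego, self.v_lead, self.v_comfort],
                        dtype=np.float32)

    def step(self, action_idx):
        a = self.ACTIONS[action_idx]
        self.v_ego = max(0.0, self.v_ego + a * self.DT)
        self.v_lead = max(0.0, self.v_lead + random.uniform(-0.5, 0.5))  # stochastic leader
        self.gap += (self.v_lead - self.v_ego) * self.DT
        # Assumed reward terms mirroring the objectives named in the abstract:
        # safety, driving efficiency, longitudinal comfort, vertical comfort.
        r_safe = -10.0 if self.gap < 2.0 else 0.0
        r_eff = -abs(self.v_comfort - self.v_ego) / self.v_comfort
        r_long = -0.1 * abs(a)                       # penalize harsh acceleration
        r_vert = -1.0 if self.v_ego > self.v_comfort else 0.0
        done = self.gap < 0.5 or self.gap > 100.0
        return self._obs(), r_safe + r_eff + r_long + r_vert, done

# Small Q-network mapping the 4-dimensional state to values of the 5 actions.
q_net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 5))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

env, gamma, eps = CarFollowingEnv(), 0.99, 0.1
for episode in range(200):
    s, done = env.reset(), False
    for _ in range(100):
        if done:
            break
        # epsilon-greedy action selection over discrete accelerations
        if random.random() < eps:
            a = random.randrange(5)
        else:
            a = int(q_net(torch.from_numpy(s)).argmax())
        s2, r, done = env.step(a)
        # one-step TD target (no replay buffer or target network, to keep the sketch short)
        with torch.no_grad():
            target = r + (0.0 if done else gamma * q_net(torch.from_numpy(s2)).max().item())
        pred = q_net(torch.from_numpy(s))[a]
        loss = (pred - torch.tensor(target, dtype=torch.float32)) ** 2
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        s = s2

In the paper's setting, the learned policy would be queried online with CVIS-provided state information, whereas a model predictive control-based adaptive cruise control solves an optimization problem at each step, which is the computational-efficiency comparison reported in the abstract.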

Keywords: autonomous vehicle; deep reinforcement learning; ride comfort; safety; speed control.

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Accidents, Traffic* / prevention & control
  • Automobile Driving*
  • Automobiles
  • China
  • Humans
  • Learning
  • Reinforcement, Psychology

Grants and funding

This research was funded in part by the National Key R&D Program of China under Grant 2021YFB1600403, in part by the Innovation Program of Shanghai Municipal Education Commission under Grant 2021-01-07-00-07-E00092, and in part by the Shanghai Municipal Science and Technology Major Project under Grant 2021SHZDZX0100.