Influence of Multi-Modal Warning Interface on Takeover Efficiency of Autonomous High-Speed Train

Int J Environ Res Public Health. 2022 Dec 25;20(1):322. doi: 10.3390/ijerph20010322.

Abstract

As a large-scale public transport mode, high-speed rail has a driving-safety record with a profound impact on public health. In this study, we determined the most efficient multi-modal warning interface for the automatic driving of a high-speed train and put forward suggestions for its optimization and improvement. Forty-eight participants were recruited, and a driving experiment was carried out in a simulated 350 km/h high-speed train equipped with a multi-modal warning interface. Eye-movement and behavioral parameters were then analyzed with the independent-samples Kruskal-Wallis test and one-way analysis of variance. The results showed that the current level 3 warning visual interface of a high-speed train carried the richest warning graphic information, yet it failed to increase the driver's takeover efficiency. The level 2 warning visual interface attracted drivers' attention more readily than the level 1 interface, but it still needs optimization in the relevance of, and guidance between, its graphic and text elements. Multi-modal warning interfaces yielded faster responses than single-modal ones. The auditory-visual multi-modal interface produced the highest takeover efficiency and was the most suitable for the most urgent (level 3) high-speed train warning. Introducing an auditory interface increased the efficiency of a purely visual interface, whereas introducing a tactile interface did not. These findings can serve as a basis for the interface design of automatically driven high-speed trains and help improve their active safety, which is of great significance for protecting public health and safety.
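
For readers unfamiliar with the two tests named above, the short Python sketch below shows how such between-group comparisons are typically computed with SciPy. It is a minimal illustration under assumed data, not the authors' analysis code: the group names and the reaction-time values are hypothetical placeholders standing in for the study's eye-movement and behavioral parameters.

    # Minimal sketch of the group comparisons named in the abstract:
    # an independent-samples Kruskal-Wallis test and a one-way ANOVA.
    # All values below are hypothetical placeholders, NOT the study's data.
    from scipy import stats

    # Hypothetical takeover reaction times (s) for three warning interfaces.
    visual_only = [2.8, 3.1, 2.9, 3.4, 3.0]
    auditory_visual = [2.1, 2.3, 2.0, 2.4, 2.2]
    tactile_visual = [2.7, 2.9, 3.0, 2.8, 3.2]

    # Kruskal-Wallis: non-parametric test for differences between independent
    # groups, used when normality cannot be assumed.
    h_stat, p_kw = stats.kruskal(visual_only, auditory_visual, tactile_visual)
    print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p_kw:.4f}")

    # One-way ANOVA: parametric test of equal group means, used for
    # approximately normally distributed parameters.
    f_stat, p_anova = stats.f_oneway(visual_only, auditory_visual, tactile_visual)
    print(f"One-way ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")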

Keywords: autonomous driving; high-speed train; multi-modal interface; takeover efficiency.

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Accidents, Traffic
  • Attention
  • Automobile Driving*
  • Eye Movements
  • Humans
  • Reaction Time
  • Touch
  • Transportation

Grants and funding

This work was supported by the National Natural Science Foundation of China (grant number 52175253); the MOE Layout Foundation of Humanities and Social Sciences (grant number 19YJA760094); the Sichuan Natural Science Foundation (Youth Science Foundation) (grant number 22NSFSC0865); the Sichuan Provincial Key Laboratory of Digital Media Art, Sichuan Conservatory of Music (grant number 22DMAKL02); the Degree and Postgraduate Education and Teaching Reform Project of Southwest Jiaotong University (grant number YJG5-2022-Y038); and the China Academy of Fine Arts Creative Design and Intelligent Laboratory Open Fund Project (supported by the Design-AI Lab of the China Academy of Art), General Project (grant number CAADAI2022B002).