Learning Assurance Analysis for Further Certification Process of Machine Learning Techniques: Case-Study Air Traffic Conflict Detection Predictor

Sensors (Basel). 2022 Oct 10;22(19):7680. doi: 10.3390/s22197680.

Abstract

Designing and developing artificial intelligence (AI)-based systems that can be justifiably trusted is one of the main challenges aviation must face in the coming years. The European Union Aviation Safety Agency (EASA) has published a user guide that could potentially become a means of compliance for future AI-based regulation. Designers and developers must understand how the learning assurance process of any machine learning (ML) model impacts trust. ML is a narrow branch of AI that uses statistical models to make predictions. This work deals with the learning assurance process for ML-based systems in the field of air traffic control. A conflict detection tool was developed to identify separation infringements between aircraft pairs, with extreme gradient boosting used as the ML algorithm for both classification and regression. This paper analyses the validity and adaptability of the EASA W-shaped methodology for ML-based systems. The results identify a gap in the EASA W-shaped methodology regarding time-dependent analysis, showing how time can affect ML algorithms designed without time requirements. Another meaningful conclusion is that, for systems that depend strongly on when the prediction is made, classification and regression metrics cannot be one-size-fits-all, because they vary over time.
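To illustrate the kind of pipeline the abstract describes, the sketch below trains an extreme gradient boosting classifier on pairwise aircraft features and then evaluates the same metric per look-ahead bucket, which is the time-dependence issue the paper raises. This is a minimal, hypothetical example on synthetic data: the feature names, separation thresholds, and bucket edges are assumptions for illustration, not the authors' implementation or dataset.

```python
# Hypothetical sketch: XGBoost conflict-detection classifier with a
# time-bucketed evaluation. Features, labels, and thresholds are synthetic
# assumptions, not the paper's actual setup.
import numpy as np
import xgboost as xgb
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
n = 5000

# Illustrative features for an aircraft pair: closure rate (kt), horizontal
# distance (NM), vertical separation (ft), and look-ahead time to the
# predicted closest point of approach (min).
closure_rate = rng.uniform(0, 900, n)
horiz_dist = rng.uniform(0, 60, n)
vert_sep = rng.uniform(0, 4000, n)
lookahead_min = rng.uniform(0, 20, n)
X = np.column_stack([closure_rate, horiz_dist, vert_sep, lookahead_min])

# Synthetic label: infringement if the pair would end up closer than 5 NM
# horizontally and 1000 ft vertically (standard en-route minima), with a
# small amount of label noise so the problem is not trivially separable.
infringes = (horiz_dist - closure_rate * lookahead_min / 60.0 < 5) & (vert_sep < 1000)
y = (infringes ^ (rng.random(n) < 0.05)).astype(int)

X_tr, X_te, y_tr, y_te, t_tr, t_te = train_test_split(
    X, y, lookahead_min, test_size=0.3, random_state=0)

clf = xgb.XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1,
                        eval_metric="logloss")
clf.fit(X_tr, y_tr)
y_hat = clf.predict(X_te)

# Report the same metric per look-ahead bucket: performance close to the
# predicted conflict typically differs from performance far ahead of it,
# which is why a single aggregate score can be misleading.
for lo, hi in [(0, 5), (5, 10), (10, 20)]:
    mask = (t_te >= lo) & (t_te < hi)
    print(f"look-ahead {lo:>2}-{hi:>2} min  F1 = {f1_score(y_te[mask], y_hat[mask]):.3f}")
```

Evaluating a fixed metric over time buckets in this way is one simple means of exposing the time dependence the abstract points to; in operational use, the bucket boundaries would follow the system's actual prediction horizons.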

Keywords: air transport; conflict detection; learning assurance; machine learning; trustworthiness.

MeSH terms

  • Algorithms
  • Artificial Intelligence*
  • Aviation* / methods
  • Certification
  • Machine Learning