Media Forensic Considerations of the Usage of Artificial Intelligence Using the Example of DeepFake Detection

J Imaging. 2024 Feb 9;10(2):46. doi: 10.3390/jimaging10020046.

Abstract

In recent discussions in the European Parliament, the need for regulation of so-called high-risk artificial intelligence (AI) systems was identified; such regulation is currently codified in the upcoming EU Artificial Intelligence Act (AIA), which has been approved by the European Parliament. The AIA is the first such document to be turned into European law. This initiative focuses on turning AI systems into decision support systems (human-in-the-loop and human-in-command), in which the human operator remains in control of the system. While this supposedly resolves accountability issues, it introduces, on the one hand, the necessary human-computer interaction as a potential new source of errors; on the other hand, it is potentially a very effective approach to decision interpretation and verification. This paper discusses the requirements that high-risk AI systems must meet once the AIA comes into force. Particular attention is paid to the opportunities and limitations that result from the decision support setting and from increasing the explainability of the system. This is illustrated using the media forensic task of DeepFake detection as an example.

Keywords: Artificial Intelligence Act (AIA); DeepFake; DeepFake detection; artificial intelligence (AI); explainable AI (xAI); forensics; human–AI interfaces; system causability scale (SCS).