Applying XAI to an AI-based system for candidate management to mitigate bias and discrimination in hiring

Electron Mark. 2022;32(4):2207-2233. doi: 10.1007/s12525-022-00600-9. Epub 2022 Dec 20.

Abstract

Assuming that potential biases of Artificial Intelligence (AI)-based systems can be identified and controlled for (e.g., by providing high-quality training data), employing such systems to augment human resource (HR) decision-makers in candidate selection provides an opportunity to make selection processes more objective. However, as the final hiring decision is likely to remain with humans, prevalent human biases could still cause discrimination. This work investigates the impact of an AI-based system's candidate recommendations on humans' hiring decisions and how this relationship could be moderated by an Explainable AI (XAI) approach. We used a self-developed platform and conducted an online experiment with 194 participants. Our quantitative and qualitative findings suggest that the recommendations of an AI-based system can reduce discrimination against older and female candidates but appear to cause fewer selections of foreign-race candidates. Contrary to our expectations, the same XAI approach moderated these effects differently depending on the context.

Supplementary information: The online version contains supplementary material available at 10.1007/s12525-022-00600-9.

Keywords: Bias; Discrimination; Ethics; Explainable AI; Hiring.