Emotions play a significant role in human relations, decision-making, and the motivation to act on those decisions. There are ongoing attempts to use artificial intelligence (AI) to read human emotions and to predict the behavior or actions that may follow from those emotions. However, a person's emotions cannot be easily identified, measured, or evaluated by others, including automated machines and algorithms run by AI. The ethics of emotional AI remains an open research question, and this study examined emotional variables as well as perceptions of emotional AI in two large random groups of college students at an international university in Japan, with heavy representation of Japanese, Indonesian, Korean, Chinese, Thai, Vietnamese, and other Asian nationalities. Surveys with multiple closed-ended questions and an open-ended essay question about emotional AI were administered for quantitative and qualitative analysis, respectively. The results demonstrate how ethically questionable conclusions may be reached through affective computing and by searching for correlations among a variety of factors in collected data to classify individuals into certain categories, thereby aggravating bias and discrimination. Nevertheless, the qualitative analysis of students' essays shows a rather optimistic view of the use of emotional AI, underscoring the need to raise awareness of the ethical pitfalls of AI technologies in the complex field of human emotions.
Keywords: AI bias; AI discrimination; AI ethics; Affective computing; Artificial intelligence; Emotional AI.
© National University of Singapore and Springer Nature Singapore Pte Ltd. 2022. Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.