Dehumanisation of 'Outgroups' on Facebook and Twitter: towards a framework for assessing online hate organisations and actors

SN Soc Sci. 2021;1(9):238. doi: 10.1007/s43545-021-00240-4. Epub 2021 Sep 22.

Abstract

Whilst preventing the dehumanisation of outgroups is a widely accepted goal in the field of countering violent extremism, the current algorithms used by social media platforms focus on detecting individual samples of explicit language. This study tests whether explicit dehumanising language directed at Muslims is detected by the moderation tools of Facebook and Twitter, and further, whether the presence of explicit dehumanising terms is necessary to successfully dehumanise 'the other', in this case Muslims. Answering both questions in the negative, this analysis extracts analytical tools that could be used together to consistently and competently assess actors using dehumanisation as a measure, even where that dehumanisation is cumulative and grounded in discourse rather than explicit language. The output of one prolific actor identified by researchers as an anti-Muslim hate organisation, and of four other anti-Muslim actors, is discursively analysed, and the impacts are considered through the comments it elicits. Whilst this study focuses on material gathered with respect to anti-Muslim discourses, the findings are relevant to a range of contexts in which groups are dehumanised on the basis of race or another protected attribute. This study suggests it is possible to predict aggregate harm by specific actors from a range of samples of borderline content, each of which might be difficult to discern as harmful individually.

Keywords: Content moderation; Dangerous organisations; Dehumanisation; Digital platform policy; Out-groups; Right wing extremism.