Legal, ethical, and wider implications of suicide risk detection systems in social media platforms

J Law Biosci. 2021 Jul 15;8(1):lsab021. doi: 10.1093/jlb/lsab021. eCollection 2021 Jan-Jun.

Abstract

Suicide remains a public health problem of worldwide importance. Cognizant of the emerging links between social media use and suicide, social media platforms such as Facebook have developed automated algorithms to detect suicidal behavior. While seemingly a well-intentioned adjunct to public health, this approach raises several ethical and legal concerns. For example, the role of consent in using individual data in this manner has received only cursory attention. Social media users may not even be aware that their posts, movements, and Internet searches are being analyzed by non-health professionals, who have the authority to involve law enforcement upon suspicion of potential self-harm. Failure to obtain such consent presents privacy risks and can lead to exposure and wider potential harms. We argue that Facebook's practices in this area should be subject to well-established protocols resembling those used in human subjects research, which upholds standardized, agreed-upon, and well-recognized ethical practices based on generations of precedent. An ethical review process should be carried out before sensitive data are collected from social media users. A fiduciary framework resonates with the emergent roles and obligations of social media platforms to accept greater responsibility for the content being shared.

Keywords: AI; algorithms; consent; ethics; legal implications; privacy; social media platforms; suicide risk detection.