[Dialogues with neural networks about the present and future of medical professions: risks and prospects]

Probl Sotsialnoi Gig Zdravookhranenniiai Istor Med. 2023 Oct;31(Special Issue 2):1097-1103. doi: 10.32687/0869-866X-2023-31-s2-1097-1103.
[Article in Russian]

Abstract

The article examines the risks to the social perception of health professions in the context of the growing popularity of generative text and image neural networks. The article is consistent with the growing body of research that identifies increasing risks of this technology, including for healthcare. The article's second focus, stereotypes about health professions, is critical, as such stereotypes have been associated with negative outcomes for the healthcare system. The study primarily focuses on the perception of nursing professionals as the group most stereotyped by the public. Using outputs from the ChatGPT, Kandinsky, and Shedevrum networks, the author has investigated the perceptions of the nursing profession embedded in these neural networks. Reproduction of certain stereotypes has been identified in all of the networks: appearance stereotypes (gender, age, race, uniform) are more often shared by image networks, while functional stereotypes (lack of autonomy, dependence on colleagues) are more often shared by text networks. This suggests the risk of a vicious circle, in which a network is trained on largely stereotypical data and products based on its output then increase the scale of stereotypical representations during subsequent training. Developers' efforts to avoid reproducing stereotypes are acknowledged; however, the outputs show that these efforts are not fully successful. To prevent the spread of perceptual distortions in neural networks, it is recommended to promote their sociohumanistic evaluation. A "prompt experiment" is proposed as a mechanism for identifying such risks.

Keywords: medical profession; midjourney; neural networks; nurse; perception; ChatGPT.

Publication types

  • English Abstract

MeSH terms

  • Delivery of Health Care
  • Medicine*
  • Neural Networks, Computer
  • Stereotyping*