Optimising peace through a Universal Global Peace Treaty to constrain the risk of war from a militarised artificial superintelligence

AI Soc. 2022 Jan 11:1-14. doi: 10.1007/s00146-021-01382-y. Online ahead of print.

Abstract

This article argues that an artificial superintelligence (ASI) emerging in a world where war is still normalised constitutes a catastrophic existential risk, either because the ASI might be employed by a nation-state to wage war for global domination, i.e., ASI-enabled warfare, or because the ASI wages war on its own behalf to establish global domination, i.e., ASI-directed warfare. Presently, few states declare or even wage war on each other, in part due to the 1945 UN Charter, which requires Member States to "refrain in their international relations from the threat or use of force", while allowing for UN Security Council-endorsed military measures and self-defence. Since UN Member States no longer declare war on each other, only 'international armed conflicts' occur. However, costly interstate conflicts, both hot and cold, still take place and are tantamount to wars. Further, a New Cold War between AI superpowers looms. An ASI-directed or ASI-enabled future conflict could trigger total war, including nuclear war, and is therefore high risk. Via conforming instrumentalism, an international relations theory, we advocate reducing this risk by optimising peace through a Universal Global Peace Treaty (UGPT), which would contribute to ending existing wars and preventing future ones, together with a Cyberweapons and Artificial Intelligence Convention. This strategy could influence state actors, including those developing ASIs, or an agential ASI, particularly if it values conforming instrumentalism and peace.

Supplementary information: The online version contains supplementary material available at 10.1007/s00146-021-01382-y.

Keywords: AI arms race; Artificial superintelligence; Conforming instrumentalism; Existential risk; International relations; Peace.