Apples, oranges, robots: four misunderstandings in today's debate on the legal status of AI systems

Philos Trans A Math Phys Eng Sci. 2018 Oct 15;376(2133):20180168. doi: 10.1098/rsta.2018.0168.

Abstract

Scholars have increasingly discussed the legal status(es) of robots and artificial intelligence (AI) systems over the past three decades; however, the 2017 resolution of the EU parliament on the 'electronic personhood' of AI robots has reignited the debate and even made it ideological. Against this background, the aim of the paper is twofold. First, the intent is to show how today's discussion on the legal status(es) of AI systems often leads to different kinds of misunderstanding, regarding both the legal personhood of AI robots and their status as accountable agents establishing rights and obligations in contracts and business law. Second, the paper claims that whether or not the legal status of AI systems as accountable agents in civil--as opposed to criminal--law may make sense is an empirical issue, which should not be 'politicized'. Rather, a pragmatic approach seems preferable, as shown by methods of competitive federalism and legal experimentation. In the light of the classical distinction between primary rules and secondary rules of the law, examples of competitive federalism and legal experimentation aim to show how the secondary rules of the law can help us understand what kind of primary rules we may wish for our AI robots. This article is part of the theme issue 'Governing artificial intelligence: ethical, legal, and technical opportunities and challenges'.

Keywords: accountability; artificial intelligence; legal experimentation; liability; robotics.