Towards human-centred standards for legal help AI

Philos Trans A Math Phys Eng Sci. 2024 Apr 15;382(2270):20230157. doi: 10.1098/rsta.2023.0157. Epub 2024 Feb 26.

Abstract

As more groups consider how AI may be used in the legal sector, this paper envisions how companies and policymakers can prioritize community members' perspectives as they design AI tools and the policies that govern them. It presents findings from structured interviews and design sessions in which community members were asked whether, how, and why they would use AI tools powered by large language models to respond to legal problems such as receiving an eviction notice. Respondents reviewed options for simple versus complex interfaces for AI tools and described how they would want to engage with an AI tool to resolve a legal problem. These empirical findings can counterbalance proposals about the public interest in AI advanced by legal domain experts, including attorneys, court officials, advocates and regulators. By hearing directly from community members about how they want to use AI for civil justice tasks, which risks concern them, and what value they would find in different kinds of AI tools, this research helps ensure that people's own points of view, rather than only domain experts' assertions about their needs and preferences, are understood and prioritized in the design of legal help AI. This article is part of the theme issue 'A complexity science approach to law and governance'.

Keywords: access to justice; artificial intelligence; legal design; legal technology; participatory policymaking.