A Path Towards Legal Autonomy: An interoperable and explainable approach to extracting, transforming, loading and computing legal information using large language models, expert systems and Bayesian networks
Constant, Axel, Westermann, Hannes, Wilson, Bryan, Kiefer, Alex, Hipolito, Ines, Pronovost, Sylvain, Swanson, Steven, Albarracin, Mahault, Ramstead, Maxwell J. D.
–arXiv.org Artificial Intelligence
University of Sussex, School of Engineering and Informatics, Chichester I, CI-128, Falmer, Brighton, BN1 9RH, United Kingdom

Acknowledgement
This work was supported by a European Research Council Grant (XSCAPE) ERC-2020-SyG 951631.

Abstract
Legal autonomy -- the lawful activity of artificial intelligence agents -- can be achieved in one of two ways: either by imposing constraints on AI actors, such as developers, deployers and users, and on AI resources, such as data; or by imposing constraints on the range and scope of the impact that AI agents can have on the environment. The latter approach involves encoding extant rules concerning AI-driven devices into the software of the AI agents controlling those devices (e.g., encoding rules about limitations on zones of operation into the agent software of an autonomous drone). This is a challenge, since the effectiveness of such an approach requires a method of extracting, transforming, loading and computing legal information that is both explainable and legally interoperable, and that enables AI agents to "reason" about the law. In this paper, we sketch a proof of principle for such a method using large language models (LLMs), expert legal systems known as legal decision paths, and Bayesian networks. We then show how the proposed method could be applied to extant regulation of autonomous cars, such as the California Vehicle Code.

Keywords: Legal Reasoning; Large Language Models; Expert Systems; Bayesian Networks; Explainability; Interoperability; Autonomous Vehicles

1. Two paths towards legal autonomy
What does it mean to regulate artificial intelligence (AI), and how should we go about it? To answer this question, one must first be clear on what artificial intelligence is -- at least, for the purposes of the law -- and then ask whether existing laws are sufficient for its regulation.
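The abstract above describes computing legal information with Bayesian networks, e.g., a rule about zones of operation for an autonomous drone. As a minimal, purely illustrative sketch (the rule, node names, and probabilities are assumptions, not drawn from any actual statute or from the paper's implementation), a single legal rule can be encoded as one node of a Bayesian network and evaluated under uncertain evidence:

```python
# Hypothetical sketch: one legal rule ("operation is lawful iff the drone
# is inside a permitted zone") encoded as a Bayesian network node, then
# marginalized over the agent's uncertain estimate of its zone status.

# Evidence: the agent's belief that it is inside a permitted zone.
p_in_zone = 0.9

# Conditional probability table for "operation is lawful" given zone status.
# The legal rule itself is deterministic; uncertainty enters via the evidence.
p_lawful_given_zone = {True: 1.0, False: 0.0}

# Marginalize: P(lawful) = sum_z P(lawful | z) * P(z)
p_lawful = (p_lawful_given_zone[True] * p_in_zone
            + p_lawful_given_zone[False] * (1 - p_in_zone))

print(p_lawful)  # 0.9
```

In a full pipeline of the kind the abstract sketches, the conditional probability table would be derived from a legal decision path extracted from the statute, rather than written by hand.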
The emerging consensus is that the term "AI" refers to software (i) that is developed using computational techniques, (ii) that is able to make decisions that influence an environment, (iii) that is able to make such decisions autonomously, or partly autonomously, and (iv) that makes those decisions in order to align with a set of human-defined objectives. In AI research, decision-making typically involves the ability to evaluate options, predict outcomes, and select an optimal or satisfactory course of action based on the available data and predefined objectives. This process is crucial in distinguishing AI systems from simple automated systems that operate on a fixed set of rules without variation or learning (Friedman & Frank, 1983; Gupta et al., 2022). Autonomy in AI is characterized by goal-oriented behaviour, where the system is not merely reacting to inputs according to fixed rules but is actively pursuing objectives.
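The decision loop just described (evaluate options, predict outcomes, select the best action relative to an objective) can be sketched in a few lines. The action names, speeds, and target are illustrative assumptions, not part of the paper:

```python
# Minimal sketch of goal-directed action selection, as opposed to a fixed
# stimulus-response rule: score each candidate action by a predicted
# outcome and pick the one closest to the objective.

def predict_outcome(action, target_speed):
    """Toy outcome model: negative distance from the target speed (km/h)."""
    predicted_speeds = {"accelerate": 60, "hold": 45, "brake": 25}
    return -abs(predicted_speeds[action] - target_speed)

def choose_action(actions, target_speed):
    # Goal-oriented selection: the objective (target_speed) drives the choice.
    return max(actions, key=lambda a: predict_outcome(a, target_speed))

print(choose_action(["accelerate", "hold", "brake"], 50))  # hold
```

A simple automated system, by contrast, would map each input to a fixed action regardless of the objective, which is precisely the distinction drawn above.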
Mar-27-2024