A Path Towards Legal Autonomy: An interoperable and explainable approach to extracting, transforming, loading and computing legal information using large language models, expert systems and Bayesian networks
Constant, Axel, Westermann, Hannes, Wilson, Bryan, Kiefer, Alex, Hipolito, Ines, Pronovost, Sylvain, Swanson, Steven, Albarracin, Mahault, Ramstead, Maxwell J. D.
University of Sussex, School of Engineering and Informatics, Chichester I, CI-128, Falmer, Brighton, BN1 9RH, United Kingdom

Acknowledgement
This work was supported by a European Research Council Grant (XSCAPE) ERC-2020-SyG 951631.

Abstract
Legal autonomy -- the lawful activity of artificial intelligence agents -- can be achieved in one of two ways: by imposing constraints on AI actors such as developers, deployers and users, and on AI resources such as data; or by imposing constraints on the range and scope of the impact that AI agents can have on the environment. The latter approach involves encoding extant rules concerning AI-driven devices into the software of the AI agents controlling those devices (e.g., encoding rules about limitations on zones of operation into the agent software of an autonomous drone). This is a challenge, since the effectiveness of such an approach requires a method of extracting, transforming, loading and computing legal information that is both explainable and legally interoperable, and that enables AI agents to "reason" about the law. In this paper, we sketch a proof of principle for such a method using large language models (LLMs), expert legal systems known as legal decision paths, and Bayesian networks. We then show how the proposed method could be applied to extant regulation in matters of autonomous cars, such as the California Vehicle Code.

Keywords
Legal Reasoning; Large Language Models; Expert System; Bayesian Network; Explainability; Interoperability; Autonomous Vehicles

1. Two paths towards legal autonomy
What does it mean to regulate artificial intelligence (AI), and how should we go about it? To answer this question, one must first be clear on what artificial intelligence is -- at least, for the purposes of the law -- and then ask whether existing laws are sufficient for its regulation. The consensus is that the term "AI" refers to software (i) that is developed using computational techniques, (ii) that is able to make decisions that influence an environment, (iii) that is able to make such decisions autonomously, or partly autonomously, and (iv) that makes those decisions in alignment with a set of human-defined objectives. In AI research, decision-making typically involves the ability to evaluate options, predict outcomes, and select an optimal or satisfactory course of action based on the data available and predefined objectives. This process is crucial in distinguishing AI systems from simple automated systems that operate on a fixed set of rules without variation or learning (Friedman & Frank, 1983; Gupta et al., 2022). Autonomy in AI is characterized by goal-oriented behaviour, where the system is not merely reacting to inputs based on fixed rules but actively pursuing objectives.
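To make the abstract's pipeline concrete, the following is a minimal, dependency-free sketch of the Bayesian-network stage: binary legal conditions (which the paper's method would have an LLM extract from regulatory text and route through a legal decision path) feed a node scoring the lawfulness of an action. The variable names and probabilities below are invented for illustration and are not the paper's actual model of the California Vehicle Code.

```python
# Toy Bayesian network over hypothetical legal conditions for an
# autonomous vehicle; posterior computed by brute-force enumeration.
from itertools import product

# Prior beliefs over the two parent conditions (illustrative numbers).
P_permit = {True: 0.95, False: 0.05}   # valid testing permit on file
P_odd = {True: 0.90, False: 0.10}      # inside operational design domain

# CPT for "operation is lawful" given its parents; encodes a decision
# path: lawful with near certainty only if both conditions hold.
P_lawful = {
    (True, True): 0.99,
    (True, False): 0.01,
    (False, True): 0.01,
    (False, False): 0.001,
}

def posterior_lawful(evidence):
    """P(lawful=True | evidence), enumerating all eight worlds."""
    num = den = 0.0
    for permit, odd, lawful in product([True, False], repeat=3):
        world = {"permit": permit, "odd": odd, "lawful": lawful}
        if any(world[k] != v for k, v in evidence.items()):
            continue  # world inconsistent with the observed evidence
        p_l = P_lawful[(permit, odd)] if lawful else 1 - P_lawful[(permit, odd)]
        p = P_permit[permit] * P_odd[odd] * p_l
        den += p
        if lawful:
            num += p
    return num / den

# E.g., the geofence check reports the vehicle has left its permitted zone:
print(posterior_lawful({"odd": False}))  # low probability of lawfulness
```

The enumeration keeps every inference step inspectable, which is one way to read the paper's twin demands of explainability (each conditional probability table is a legible statement of a rule) and interoperability (the tables are plain data that other systems can consume).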
Consciousness in Artificial Intelligence: Insights from the Science of Consciousness
Butlin, Patrick, Long, Robert, Elmoznino, Eric, Bengio, Yoshua, Birch, Jonathan, Constant, Axel, Deane, George, Fleming, Stephen M., Frith, Chris, Ji, Xu, Kanai, Ryota, Klein, Colin, Lindsay, Grace, Michel, Matthias, Mudrik, Liad, Peters, Megan A. K., Schwitzgebel, Eric, Simon, Jonathan, VanRullen, Rufin
Whether current or near-term AI systems could be conscious is a topic of scientific interest and increasing public concern. This report argues for, and exemplifies, a rigorous and empirically grounded approach to AI consciousness: assessing existing AI systems in detail, in light of our best-supported neuroscientific theories of consciousness. We survey several prominent scientific theories of consciousness, including recurrent processing theory, global workspace theory, higher-order theories, predictive processing, and attention schema theory. From these theories we derive "indicator properties" of consciousness, elucidated in computational terms that allow us to assess AI systems for these properties. We use these indicator properties to assess several recent AI systems, and we discuss how future systems might implement them. Our analysis suggests that no current AI systems are conscious, but also suggests that there are no obvious technical barriers to building AI systems which satisfy these indicators.
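As a purely schematic illustration of the report's methodology (not its actual indicator list or verdicts), one can picture an assessment as a table of computationally specified indicator properties, grouped by source theory, with a satisfied/unsatisfied verdict per property for the system under review. The indicators and verdicts below are invented placeholders.

```python
# Hypothetical representation of an "indicator properties" assessment.
from dataclasses import dataclass

@dataclass
class Indicator:
    theory: str       # theory of consciousness the indicator derives from
    name: str         # computational property to check for
    satisfied: bool   # assessor's verdict for the system under review

def assess(system_name, indicators):
    by_theory = {}
    for ind in indicators:
        by_theory.setdefault(ind.theory, []).append(ind.satisfied)
    print(f"Assessment of {system_name}:")
    for theory, verdicts in by_theory.items():
        print(f"  {theory}: {sum(verdicts)}/{len(verdicts)} indicators satisfied")

assess("ExampleTransformer", [
    Indicator("Global workspace theory", "limited-capacity workspace bottleneck", False),
    Indicator("Global workspace theory", "global broadcast to parallel modules", False),
    Indicator("Recurrent processing theory", "recurrent processing in perception", True),
])
```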
Sources of Richness and Ineffability for Phenomenally Conscious States
Ji, Xu, Elmoznino, Eric, Deane, George, Constant, Axel, Dumas, Guillaume, Lajoie, Guillaume, Simon, Jonathan, Bengio, Yoshua
Conscious states (states that there is something it is like to be in) seem both rich, or full of detail, and ineffable, or hard to fully describe or recall. The problem of ineffability, in particular, is a longstanding issue in philosophy that partly motivates the explanatory gap: the belief that consciousness cannot be reduced to underlying physical processes. Here, we provide an information theoretic dynamical systems perspective on the richness and ineffability of consciousness. In our framework, the richness of conscious experience corresponds to the amount of information in a conscious state and ineffability corresponds to the amount of information lost at different stages of processing. We describe how attractor dynamics in working memory would induce impoverished recollections of our original experiences, how the discrete symbolic nature of language is insufficient for describing the rich and high-dimensional structure of experiences, and how similarity in the cognitive function of two individuals relates to improved communicability of their experiences to each other. While our model may not settle all questions relating to the explanatory gap, it makes progress toward a fully physicalist explanation of the richness and ineffability of conscious experience: two important aspects that seem to be part of what makes qualitative character so puzzling.
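As a toy illustration of the framework's central quantity, assuming an idealized one-dimensional attractor map and an invented input distribution (neither is from the paper), the sketch below measures richness as the entropy of a fine-grained state, and ineffability as the entropy lost when attractor dynamics in working memory collapse that state onto a few fixed points.

```python
# Toy information-theoretic sketch: a working-memory "attractor" map
# collapses a rich continuous state onto one of a few fixed points, and
# the entropy lost in that collapse operationalizes ineffability.
import math
import random

random.seed(0)

ATTRACTORS = [-1.0, 0.0, 1.0]  # hypothetical fixed points of the dynamics

def settle(state):
    """Attractor dynamics idealized as a snap to the nearest fixed point."""
    return min(ATTRACTORS, key=lambda a: abs(a - state))

# Rich state: uniform over 64 fine-grained bins on [-1, 1) (~6 bits).
bins = [i / 32.0 - 1.0 for i in range(64)]
samples = [random.choice(bins) for _ in range(100_000)]

def entropy(values):
    """Empirical Shannon entropy in bits."""
    counts = {}
    for v in values:
        counts[v] = counts.get(v, 0) + 1
    n = len(values)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

H_rich = entropy(samples)                           # richness of the state
H_recalled = entropy([settle(s) for s in samples])  # after attractor collapse

print(f"richness of state: {H_rich:.2f} bits")
print(f"after recall:      {H_recalled:.2f} bits")
print(f"information lost:  {H_rich - H_recalled:.2f} bits (ineffability)")
```

The same subtraction could in principle be repeated at each stage the abstract lists (working memory, verbal report, communication between individuals), with each stage contributing its own increment of lost information.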