Hipolito, Ines
Modeling Sustainable Resource Management using Active Inference
Albarracin, Mahault, Hipolito, Ines, Raffa, Maria, Kinghorn, Paul
Active inference helps us simulate adaptive behavior and decision-making in biological and artificial agents. Building on our previous work exploring the relationship between active inference, well-being, resilience, and sustainability, we present a computational model of an agent learning sustainable resource management strategies in both static and dynamic environments. The agent's behavior emerges from optimizing its own well-being, represented by prior preferences, subject to beliefs about environmental dynamics. In a static environment, the agent learns to consistently consume resources to satisfy its needs. In a dynamic environment where resources deplete and replenish based on the agent's actions, the agent adapts its behavior to balance immediate needs with long-term resource availability. This demonstrates how active inference can give rise to sustainable and resilient behaviors in the face of changing environmental conditions. We discuss the implications of our model, its limitations, and suggest future directions for integrating more complex agent-environment interactions. Our work highlights active inference's potential for understanding and shaping sustainable behaviors.
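To make the kind of setup described above concrete, the following is a minimal, hypothetical sketch in plain Python/NumPy, not the model reported in the paper: an agent with prior preferences over observations scores each action with a crude expected-free-energy proxy and must trade off consumption against depletion of a replenishing resource. All names, dynamics, and numbers below are assumptions introduced purely for illustration.

# Illustrative sketch only: a toy, active-inference-style agent balancing
# consumption against resource depletion. The dynamics and parameters are
# assumptions for illustration, not the authors' model.
import numpy as np

N_LEVELS = 6                      # discrete resource levels 0..5 (0 = depleted)
ACTIONS = ["wait", "consume"]

# Prior preferences over observations (log-probabilities):
# 0 = hungry, 1 = satiated, 2 = resource depleted.
log_C = np.log(np.array([0.2, 0.7, 0.1]))

def transition(level, action, rng):
    """Hypothetical dynamics: consumption depletes, waiting replenishes."""
    if action == "consume" and level > 0:
        level -= 1
    elif action == "wait" and level < N_LEVELS - 1 and rng.random() < 0.6:
        level += 1
    return level

def observe(level, action):
    if level == 0:
        return 2                  # depleted
    return 1 if action == "consume" else 0

def expected_risk(belief, action, horizon=3):
    """Crude expected-free-energy proxy: average surprise relative to the
    preferences over Monte Carlo rollouts (take `action`, then wait)."""
    rng = np.random.default_rng(0)
    risk = 0.0
    for _ in range(200):
        level = rng.choice(N_LEVELS, p=belief)
        for t in range(horizon):
            a = action if t == 0 else "wait"
            level = transition(level, a, rng)
            risk -= log_C[observe(level, a)]
    return risk / 200

# Run the agent: it consumes while the resource is plentiful and shows
# restraint as its belief shifts towards depletion.
rng = np.random.default_rng(1)
true_level = N_LEVELS - 1
belief = np.eye(N_LEVELS)[true_level]        # initially certain the resource is full
for step in range(10):
    risks = np.array([expected_risk(belief, a) for a in ACTIONS])
    action = ACTIONS[int(np.argmin(risks))]
    true_level = transition(true_level, action, rng)
    # A full treatment would update the belief via the transition model;
    # here we simply track the true level for brevity.
    belief = np.eye(N_LEVELS)[true_level]
    print(step, action, "resource level:", true_level)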
A Path Towards Legal Autonomy: An interoperable and explainable approach to extracting, transforming, loading and computing legal information using large language models, expert systems and Bayesian networks
Constant, Axel, Westermann, Hannes, Wilson, Bryan, Kiefer, Alex, Hipolito, Ines, Pronovost, Sylvain, Swanson, Steven, Albarracin, Mahault, Ramstead, Maxwell J. D.
University of Sussex, School of Engineering and Informatics, Chichester I, CI-128, Falmer, Brighton, BN1 9RH, United Kingdom
Acknowledgement: This work was supported by a European Research Council Grant (XSCAPE) ERC-2020-SyG 951631.
Abstract: Legal autonomy -- the lawful activity of artificial intelligence agents -- can be achieved in one of two ways. It can be achieved either by imposing constraints on AI actors such as developers, deployers and users, and on AI resources such as data, or by imposing constraints on the range and scope of the impact that AI agents can have on the environment. The latter approach involves encoding extant rules concerning AI-driven devices into the software of the AI agents controlling those devices (e.g., encoding rules about limitations on zones of operation into the agent software of an autonomous drone). This is challenging, since the effectiveness of such an approach depends on a method of extracting, loading, transforming and computing legal information that is both explainable and legally interoperable, and that enables AI agents to "reason" about the law. In this paper, we sketch a proof of principle for such a method using large language models (LLMs), expert legal systems known as legal decision paths, and Bayesian networks. We then show how the proposed method could be applied to extant regulation of autonomous cars, such as the California Vehicle Code.
Keywords: Legal Reasoning; Large Language Models; Expert System; Bayesian Network; Explainability; Interoperability; Autonomous Vehicles
1. Two paths towards legal autonomy
What does it mean to regulate artificial intelligence (AI), and how should we go about it? To answer this question, one must first be clear on what artificial intelligence is -- at least, for the purposes of the law -- and then ask whether existing laws are sufficient for its regulation. The consensus is that the term "AI" refers to software (i) that is developed using computational techniques, (ii) that is able to make decisions that influence an environment, (iii) that is able to make such decisions autonomously or partly autonomously, and (iv) that makes those decisions in alignment with a set of human-defined objectives. In AI research, decision-making typically involves the ability to evaluate options, predict outcomes, and select an optimal or satisfactory course of action based on the available data and predefined objectives. This process is crucial in distinguishing AI systems from simple automated systems that operate on a fixed set of rules without variation or learning (Friedman & Frank, 1983; Gupta et al., 2022). Autonomy in AI is characterized by goal-oriented behaviour, where the system is not just reacting to inputs based on fixed rules but is actively pursuing objectives.
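As a rough illustration of the final computational step the abstract describes -- evaluating extracted legal propositions with a Bayesian network -- the sketch below encodes a simplified, made-up permit rule for autonomous vehicle operation as a three-node network and queries it by direct enumeration in plain Python. The rule, variable names, and probabilities are illustrative assumptions, not the paper's pipeline or any actual California Vehicle Code provision.

# Hypothetical sketch: a made-up rule ("operation is lawful only if a permit is
# held and a remote operator is available") encoded as a small Bayesian network.
# All variables and numbers are assumptions for illustration.

# Priors over the facts an upstream LLM / decision-path stage would extract.
p_permit = {True: 0.5, False: 0.5}           # P(PermitHeld)
p_operator = {True: 0.5, False: 0.5}         # P(RemoteOperator)

def p_lawful(permit, operator):
    """Near-deterministic CPT encoding the hypothetical rule."""
    return 0.99 if (permit and operator) else 0.01

def query_lawful(evidence):
    """P(LawfulOperation = True | evidence), by enumerating the joint distribution."""
    num = 0.0   # probability mass where LawfulOperation is True
    den = 0.0   # total mass consistent with the evidence
    for permit in (True, False):
        if "PermitHeld" in evidence and evidence["PermitHeld"] != permit:
            continue
        for operator in (True, False):
            if "RemoteOperator" in evidence and evidence["RemoteOperator"] != operator:
                continue
            prior = p_permit[permit] * p_operator[operator]
            num += prior * p_lawful(permit, operator)
            den += prior
    return num / den

# Facts extracted for a concrete case: a permit is held, but no remote operator
# is available; the encoded rule assigns the planned operation a low probability
# of being lawful (about 0.01).
print(query_lawful({"PermitHeld": True, "RemoteOperator": False}))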
Enactive Artificial Intelligence: Subverting Gender Norms in Robot-Human Interaction
Hipolito, Ines, Winkle, Katie, Lie, Merete
This paper introduces Enactive Artificial Intelligence (eAI) as an intersectional, gender-inclusive stance towards AI. AI design is an enacted human sociocultural practice that reflects human culture and values, and unrepresentative AI design could lead to social marginalisation. Section 1, drawing from radical enactivism, outlines embodied cultural practices. Section 2 explores how intersectional gender intertwines with technoscience as a sociocultural practice. Section 3 focuses on subverting gender norms in the specific case of Robot-Human Interaction in AI. Finally, Section 4 identifies four vectors of ethics -- explainability, fairness, transparency, and auditability -- for adopting an intersectionality-inclusive stance in developing gender-inclusive AI and subverting existing gender norms in robot design.