normative system
Interview with Gillian Hadfield: Normative infrastructure for AI alignment
During the 33rd International Joint Conference on Artificial Intelligence (IJCAI), held in Jeju, I had the opportunity to meet with one of the keynote speakers, Gillian Hadfield. We spoke about her interdisciplinary research, her career trajectory, her path into AI alignment and law, and her general thoughts on AI systems.

Transcript (lightly edited for clarity)

This is an interview with Professor Gillian Hadfield, a keynote speaker at IJCAI 2024, who gave a very insightful talk about normative infrastructures and how they can guide our search for AI alignment.

Kumar Kshitij Patel (KKP): Could you talk a bit about your background and career trajectory? I want our readers to understand how much interdisciplinary work you've done over the years.

Gillian Hadfield (GH): I did a PhD in economics and a law degree, a JD, at Stanford, originally motivated by wanting to think about the big questions about the world. I read John Rawls' A Theory of Justice when I was an undergraduate, and those are the big questions: how do we organize the world and build just institutions? But I was very interested in using more formal methods and social-scientific approaches, and that's why I decided to do the joint degree. This was in the 1980s, in the early days of the widespread use of game theory. I studied information theory as a student of Canaro and Paul Milgrom in the economics department at Stanford. I did work on contract theory and bargaining theory, but I was still very interested in going to law school, not to practice law, but to learn about legal institutions and how they work. Early in my career I was part of the emerging field of law and economics, which, of course, was interdisciplinary, using economics to think about law and legal institutions.
Collaborative filtering to capture AI user's preferences as norms
Serramia, Marc, Criado, Natalia, Luck, Michael
Customising AI technologies to each user's preferences is fundamental to their functioning well. Unfortunately, current methods require too much user involvement and fail to capture their true preferences. In fact, to avoid the nuisance of manually setting preferences, users usually accept the default settings even if these do not conform to their true preferences. Norms can be useful to regulate behaviour and ensure it adheres to user preferences, but, while the literature has thoroughly studied norms, most proposals take a formal perspective. Indeed, while there has been some research on constructing norms to capture a user's privacy preferences, these methods rely on domain knowledge which, in the case of AI technologies, is difficult to obtain and maintain. We argue that a new perspective is required when constructing norms: exploiting the large amount of preference information readily available from whole systems of users. Inspired by recommender systems, we believe that collaborative filtering can offer a suitable approach to identifying a user's norm preferences without excessive user involvement.
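The collaborative-filtering idea the abstract describes can be sketched in a few lines: predict a user's unknown stance on a norm from the stances of similar users. Everything below (the cosine-similarity choice, the toy privacy-norm ratings in [-1, 1], and the function names) is a hypothetical illustration, not code from the paper.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length preference vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def predict_preference(target, others, norm_index):
    """Predict the target user's rating for one norm from other users.

    `target` has a None entry at `norm_index`; `others` are fully rated.
    Each other user's rating is weighted by their similarity to the target.
    """
    # Compare only on the norms the target has actually rated.
    rated = [i for i, r in enumerate(target) if r is not None]
    t_vec = [target[i] for i in rated]
    num = den = 0.0
    for other in others:
        sim = cosine(t_vec, [other[i] for i in rated])
        num += sim * other[norm_index]
        den += abs(sim)
    return num / den if den else 0.0

# Rows: users; columns: agreement (-1 disagree .. 1 agree) with three norms.
others = [
    [1.0, 0.8, -0.9],
    [0.9, 1.0, -0.8],
    [-0.9, -0.8, 1.0],
]
target = [1.0, 0.9, None]   # unknown preference for the third norm
print(round(predict_preference(target, others, 2), 2))  # → -0.9
```

The target closely mirrors the first two users (who reject the third norm) and is anti-similar to the third user (who accepts it), so both signals push the prediction toward rejection.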
The Jiminy Advisor: Moral Agreements among Stakeholders Based on Norms and Argumentation
Liao, Beishui (Zhejiang University) | Pardo, Pere (University of Luxembourg) | Slavkovik, Marija (University of Bergen) | van der Torre, Leendert (University of Luxembourg)
An autonomous system is constructed by a manufacturer, operates in a society subject to norms and laws, and interacts with end users. All of these actors are stakeholders affected by the behavior of the autonomous system. We address the challenge of how the ethical views of such stakeholders can be integrated in the behavior of an autonomous system. We propose an ethical recommendation component called Jiminy which uses techniques from normative systems and formal argumentation to reach moral agreements among stakeholders. A Jiminy represents the ethical views of each stakeholder by using normative systems, and has three ways of resolving moral dilemmas that involve the opinions of the stakeholders. First, the Jiminy considers how the arguments of the stakeholders relate to one another, which may already resolve the dilemma. Secondly, the Jiminy combines the normative systems of the stakeholders such that the combined expertise of the stakeholders may resolve the dilemma. Thirdly, and only if these two other methods have failed, the Jiminy uses context-sensitive rules to decide which of the stakeholders take preference over the others. At the abstract level, these three methods are characterized by adding arguments, adding attacks between arguments, and revising attacks between arguments. We show how a Jiminy can be used not only for ethical reasoning and collaborative decision-making, but also to provide explanations about ethical behavior.
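The "abstract level" the abstract mentions is Dung-style argumentation: a set of arguments plus an attack relation, with acceptance computed as an extension. A minimal sketch of the grounded extension (the least fixed point of the defence function) shows how such acceptance is computed; the three-argument example is hypothetical, not from the paper.

```python
def grounded_extension(args, attacks):
    """Least fixed point of the characteristic function F(S) = {a : S defends a}."""
    attackers = {a: {b for (b, c) in attacks if c == a} for a in args}
    s = set()
    while True:
        # a is defended by s if every attacker of a is itself attacked by s.
        defended = {a for a in args
                    if all(any((d, b) in attacks for d in s)
                           for b in attackers[a])}
        if defended == s:
            return s
        s = defended

args = {"A", "B", "C"}
# B attacks A; C attacks B  =>  C (unattacked) defends A.
attacks = {("B", "A"), ("C", "B")}
print(sorted(grounded_extension(args, attacks)))  # ['A', 'C']
```

Adding or revising attacks, as Jiminy's three resolution methods do, changes this extension and hence which stakeholder arguments end up accepted.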
Synthesis and Properties of Optimally Value-Aligned Normative Systems
Montes, Nieves (Artificial Intelligence Research Institute (IIIA-CSIC)) | Sierra, Carles (Artificial Intelligence Research Institute (IIIA-CSIC))
The value alignment problem is concerned with the design of systems that provably abide by our human values. One approach to this challenge is through the leverage of prescriptive norms that, if carefully designed, are able to steer a multiagent system away from harmful outcomes and towards more beneficial ones. In this work, we first present a general methodology for the automated synthesis of value aligned normative systems, based on a consequentialist view of values. In the second part, we provide analytical tools to examine such value aligned normative systems, namely the Shapley value of individual norms and the compatibility of several values under a fixed set of norms. We illustrate all of our contributions with a running example of a society of agents where taxes are collected and redistributed according to a set of parametrised norms.
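The Shapley value of an individual norm, one of the paper's analytical tools, can be computed exactly for small normative systems by averaging the norm's marginal contribution to the alignment score over all coalitions of the remaining norms. The two tax norms and their alignment scores below are made-up inputs for illustration.

```python
from itertools import combinations
from math import factorial

def shapley(norms, v):
    """Exact Shapley value of each norm for coalition value function v(frozenset)."""
    n = len(norms)
    phi = {}
    for i in norms:
        others = [j for j in norms if j != i]
        total = 0.0
        for r in range(n):
            for coal in combinations(others, r):
                s = frozenset(coal)
                # Standard Shapley weight |S|! (n - |S| - 1)! / n!
                weight = factorial(r) * factorial(n - r - 1) / factorial(n)
                total += weight * (v(s | {i}) - v(s))
        phi[i] = total
    return phi

# Hypothetical alignment scores: the norms are complementary, so the full
# system scores far better than either norm alone.
scores = {
    frozenset(): 0.0,
    frozenset({"collect"}): 0.2,
    frozenset({"redistribute"}): 0.0,
    frozenset({"collect", "redistribute"}): 1.0,
}
phi = shapley(["collect", "redistribute"], scores.__getitem__)
print({k: round(v, 2) for k, v in phi.items()})  # {'collect': 0.6, 'redistribute': 0.4}
```

By efficiency, the values sum to the alignment of the full normative system (1.0), splitting credit between the two norms.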
A Norm Emergence Framework for Normative MAS -- Position Paper
Morris-Martin, Andreasa, De Vos, Marina, Padget, Julian
Norm emergence is typically studied in the context of multiagent systems (MAS) where norms are implicit, and participating agents use simplistic decision-making mechanisms. These implicit norms are usually unconsciously shared and adopted through agent interaction. A norm is deemed to have emerged when a threshold or predetermined percentage of agents follow the "norm". Conversely, in normative MAS, norms are typically explicit and agents deliberately share norms through communication or are informed about norms by an authority, following which an agent decides whether to adopt the norm or not. The decision to adopt a norm by the agent can happen immediately after recognition or when an applicable situation arises. In this paper, we make the case that, similarly, a norm has emerged in a normative MAS when a percentage of agents adopt the norm. Furthermore, we posit that agents themselves can and should be involved in norm synthesis, and hence influence the norms governing the MAS, in line with Ostrom's eight principles. Consequently, we put forward a framework for the emergence of norms within a normative MAS, that allows participating agents to propose/request changes to the normative system, while special-purpose synthesizer agents formulate new norms or revisions in response to these requests. Synthesizers must collectively agree that the new norm or norm revision should proceed, and then finally be approved by an "Oracle". The normative system is then modified to incorporate the norm.
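The threshold notion of emergence is easy to make concrete with a toy simulation (my own illustration, not the authors' framework): non-adopters adopt with probability proportional to the current adoption rate, and the norm counts as "emerged" once a preset fraction of agents follow it.

```python
import random

def simulate(n_agents=100, adopt_prob=0.6, emergence_threshold=0.9,
             n_rounds=50, seed=0):
    """Return the first round at which the adoption threshold is crossed,
    or None if the norm never emerges within n_rounds."""
    rng = random.Random(seed)
    # Seed the population with roughly 10% initial adopters.
    adopted = [rng.random() < 0.1 for _ in range(n_agents)]
    for t in range(1, n_rounds + 1):
        rate = sum(adopted) / n_agents
        for i in range(n_agents):
            # Each non-adopter adopts with probability scaled by the
            # fraction of the population already following the norm.
            if not adopted[i] and rng.random() < adopt_prob * rate:
                adopted[i] = True
        if sum(adopted) / n_agents >= emergence_threshold:
            return t
    return None

print(simulate())
```

With these logistic-style dynamics the adoption rate snowballs, so the 90% threshold is typically crossed within a handful of rounds.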
Building Jiminy Cricket: An Architecture for Moral Agreements Among Stakeholders
Liao, Beishui, Slavkovik, Marija, van der Torre, Leendert
An autonomous system is constructed by a manufacturer, operates in a society subject to norms and laws, and interacts with end users. We address the challenge of how the moral values and views of all stakeholders can be integrated and reflected in the moral behaviour of the autonomous system. We propose an artificial moral agent architecture that uses techniques from normative systems and formal argumentation to reach moral agreements among stakeholders. We show how our architecture can be used not only for ethical practical reasoning and collaborative decision-making, but also for explaining such moral behaviour.
To control AI, we need to understand more about humans
From Frankenstein to I, Robot, we have for centuries been intrigued with, and terrified of, creating beings that might develop autonomy and free will. Now that we stand on the cusp of the age of ever-more-powerful artificial intelligence, the urgency of developing ways to ensure our creations always do what we want them to do is growing. For some in AI, like Mark Zuckerberg, AI just keeps getting better, and if problems come up, technology will solve them. But for others, like Elon Musk, the time to start figuring out how to regulate powerful machine-learning-based systems is now: not because I think the doomsday scenario that Hollywood loves to scare us with is around the corner, but because Zuckerberg's confidence that we can solve any future problems rests on Musk's insistence that we "learn as much as possible" now.
Deontic Logic for Human Reasoning
Furbach, Ulrich, Schon, Claudia
Deontic logic is shown to be applicable to modelling human reasoning. To this end, the Wason selection task and the suppression task are discussed in detail. Different ways of modelling norms with deontic logic are introduced, and for the Wason selection task it is demonstrated how the difference in human performance between the abstract case and the social-contract case can be explained. Furthermore, it is shown that an automated theorem prover can be used as a reasoning tool for deontic logic.
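The Wason selection task itself is small enough to check mechanically: given the rule "if p then q" and four cards, the cards worth turning are exactly those whose hidden face could falsify the rule. The sketch below uses plain material implication rather than the paper's deontic modelling; all names are illustrative.

```python
def cards_to_turn(cards, rule):
    """Return the visible faces whose hidden side could falsify `rule(p, q)`.

    Each card shows either a p-value or a q-value; the other side is
    unknown, so we test the rule against both possible hidden faces.
    """
    must_turn = []
    for face, value in cards:
        if face == "p":
            hidden = [(value, True), (value, False)]   # unknown q on the back
        else:
            hidden = [(True, value), (False, value)]   # unknown p on the back
        if any(not rule(p, q) for p, q in hidden):
            must_turn.append((face, value))
    return must_turn

# Abstract version: "if a card has a vowel (p), it has an even number (q)".
material = lambda p, q: (not p) or q   # material implication
cards = [("p", True), ("p", False), ("q", True), ("q", False)]
print(cards_to_turn(cards, material))  # [('p', True), ('q', False)]
```

The check recovers the normatively correct answer, the p card and the not-q card, which most people miss in the abstract version but find easily in social-contract framings.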
Normative Engineering Risk Management Systems
This paper describes a normative system design that incorporates diagnosis, dynamic evolution, decision making, and information gathering. A single influence diagram demonstrates the design's coherence, yet each activity is more effectively modeled and evaluated separately. Application to offshore oil platforms illustrates the design. For this application, the normative system is embedded in a real-time expert system.
Abstract Normative Systems: Semantics and Proof Theory
Tosatto, Silvano Colombo (University of Luxembourg) | Boella, Guido (University of Turin) | Torre, Leendert van der (University of Luxembourg) | Villata, Serena (INRIA)
In this paper we introduce an abstract theory of normative reasoning, whose central notion is the generation of obligations, permissions and institutional facts from conditional norms. We present various semantics and their proof systems. The theory can be used to classify and compare new candidates for standards of normative reasoning, and to explore more elaborate forms of normative reasoning than studied thus far.
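The core operation, generating obligations from conditional norms, can be sketched as fixed-point detachment: a norm (body, head) fires when its body holds, and detached conclusions can trigger further norms, roughly in the spirit of input/output logic's reusable output. The driving/licence norms below are hypothetical, not the paper's semantics.

```python
def output(norms, facts):
    """Detach obligations from conditional norms given a factual context.

    A norm is a (body, head) pair; its body is a set of atoms. Detached
    heads are fed back into the context until a fixed point is reached.
    """
    context = set(facts)
    obligations = set()
    changed = True
    while changed:
        changed = False
        for body, head in norms:
            if body <= context and head not in obligations:
                obligations.add(head)
                context.add(head)   # detached conclusions can chain
                changed = True
    return obligations

# Hypothetical norms: if driving, one ought to have a licence;
# if obliged to have a licence, one ought to carry ID.
norms = [
    (frozenset({"driving"}), "have_licence"),
    (frozenset({"have_licence"}), "carry_id"),
]
print(sorted(output(norms, {"driving"})))  # ['carry_id', 'have_licence']
```

Varying whether detached heads feed back into the context is exactly the kind of choice that distinguishes the standards of normative reasoning the paper sets out to classify.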