

Refining Gelfond Rationality Principle Towards More Comprehensive Foundational Principles for Answer Set Semantics

Shen, Yi-Dong, Eiter, Thomas

arXiv.org Artificial Intelligence

Non-monotonic logic programming is the basis for a declarative problem solving paradigm known as answer set programming (ASP). Departing from the seminal definition by Gelfond and Lifschitz in 1988 for simple normal logic programs, various answer set semantics have been proposed for extensions. We consider two important questions: (1) Should the minimal model property, constraint monotonicity and foundedness as defined in the literature be mandatory conditions for an answer set semantics in general? (2) If not, what other properties could be considered as general principles for answer set semantics? We address these questions as follows. First, it seems that the three aforementioned conditions may sometimes be too strong, and we illustrate with examples that enforcing them may exclude expected answer sets. Second, we evolve the Gelfond answer set (GAS) principles for answer set construction by refining Gelfond's rationality principle to well-supportedness, minimality w.r.t. negation by default and minimality w.r.t. epistemic negation. The principle of well-supportedness guarantees that every answer set is constructible from if-then rules obeying a level mapping and is thus free of circular justification, while the two minimality principles ensure that the formalism minimizes knowledge both at the level of answer sets and of world views. Third, to embody the refined GAS principles, we extend the notion of well-supportedness substantially to answer sets and world views, respectively. Fourth, we define new answer set semantics in terms of the refined GAS principles. Fifth, we use the refined GAS principles as an alternative baseline to intuitively assess the existing answer set semantics. Finally, we analyze the computational complexity.
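The 1988 Gelfond–Lifschitz definition the abstract departs from can be illustrated with a small sketch: a candidate set of atoms is an answer set iff it equals the least model of the program's reduct. A minimal Python toy, with rules encoded as (head, positive body, negative body) triples; the encoding is illustrative, not the paper's formalism:

```python
# A rule "h :- b1, ..., not c1, ..." is encoded as (h, [b1, ...], [c1, ...]).

def reduct(program, candidate):
    """Gelfond-Lifschitz reduct: drop rules whose negative body intersects
    the candidate set, then delete the remaining negative literals."""
    return [(head, pos) for (head, pos, neg) in program
            if not (set(neg) & candidate)]

def minimal_model(positive_program):
    """Least model of a negation-free program via fixpoint iteration."""
    model = set()
    changed = True
    while changed:
        changed = False
        for head, pos in positive_program:
            if set(pos) <= model and head not in model:
                model.add(head)
                changed = True
    return model

def is_answer_set(program, candidate):
    return minimal_model(reduct(program, candidate)) == candidate

# p :- not q.   q :- not p.   (two answer sets: {p} and {q})
prog = [("p", [], ["q"]), ("q", [], ["p"])]
print(is_answer_set(prog, {"p"}))       # True
print(is_answer_set(prog, {"p", "q"}))  # False: not minimal for the reduct
```

Because answer sets arise from this fixpoint construction, they come with a level mapping and cannot justify themselves circularly, which is the intuition behind the well-supportedness principle the paper refines.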


Anonymous Public Announcements

Ågotnes, Thomas, Galimullin, Rustam, Satoh, Ken, Tojo, Satoshi

arXiv.org Artificial Intelligence

We formalise the notion of an anonymous public announcement in the tradition of public announcement logic. Such announcements can be seen as in-between a public announcement from "the outside" (an announcement of $ϕ$) and a public announcement by one of the agents (an announcement of $K_aϕ$): we get more information than just $ϕ$, but not (necessarily) about exactly who made it. Even if such an announcement is prima facie anonymous, depending on the background knowledge of the agents it might reveal the identity of the announcer: if I post something on a message board, the information might reveal who I am even if I don't sign my name. Furthermore, like in the Russian Cards puzzle, if we assume that the announcer's intention was to stay anonymous, that in fact might reveal more information. In this paper we first look at the case when no assumptions about intentions are made, in which case the logic with an anonymous public announcement operator is reducible to epistemic logic. We then look at the case when we assume common knowledge of the intention to stay anonymous, which is both more complex and more interesting: in several ways it boils down to the notion of a "safe" announcement (again, similarly to Russian Cards). Main results include formal expressivity results and axiomatic completeness for key logical languages.
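The basic public-announcement update that such announcements build on can be sketched as a model restriction: announcing a true fact removes every world where it is false. A minimal sketch; the two-world model and agent name are illustrative:

```python
worlds = {"w1": {"p"}, "w2": set()}           # valuations per world
access = {"a": {("w1", "w1"), ("w1", "w2"),   # agent a cannot tell w1, w2 apart
                ("w2", "w1"), ("w2", "w2")}}

def knows(agent, fact, world, worlds, access):
    """K_agent fact holds at world iff fact is true in every accessible world."""
    return all(fact in worlds[v] for (u, v) in access[agent] if u == world)

def announce(fact, worlds, access):
    """Public announcement of fact: restrict the model to worlds where it holds."""
    kept = {w: val for w, val in worlds.items() if fact in val}
    new_access = {ag: {(u, v) for (u, v) in rel if u in kept and v in kept}
                  for ag, rel in access.items()}
    return kept, new_access

print(knows("a", "p", "w1", worlds, access))    # False: a is uncertain about p
worlds2, access2 = announce("p", worlds, access)
print(knows("a", "p", "w1", worlds2, access2))  # True after the announcement
```

An anonymous announcement in the paper's sense conveys more than this plain restriction (someone among the agents knew the fact) without pinning down who, which is why the intention to stay anonymous can itself carry information.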


A process algebraic framework for multi-agent dynamic epistemic systems

Aldini, Alessandro

arXiv.org Artificial Intelligence

This paper combines the classical model of labeled transition systems with the epistemic model for reasoning about knowledge. The result is a unifying framework for modeling and analyzing multi-agent, knowledge-based, dynamic systems. On the modeling side, we propose a process algebraic, agent-oriented specification language that makes such a framework easy to use for practical purposes. On the verification side, we define a modal logic encompassing temporal and epistemic operators.
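The combination the abstract describes has two orthogonal dimensions that a toy model makes concrete: labeled transitions for dynamics and an accessibility relation for knowledge. A minimal sketch; states, labels, and the agent are illustrative, not the paper's calculus:

```python
transitions = {("s0", "send", "s1"), ("s1", "ack", "s0")}        # the LTS
epistemic = {"a": {("s0", "s0"), ("s1", "s1"), ("s1", "s0")}}    # a may confuse s1 with s0
valuation = {"s0": {"idle"}, "s1": {"busy"}}

def step(state, label):
    """Temporal dimension: successor states under a labeled transition."""
    return {t for (s, l, t) in transitions if s == state and l == label}

def knows_at(agent, prop, state):
    """Epistemic dimension: prop holds in all states the agent cannot distinguish."""
    return all(prop in valuation[t] for (s, t) in epistemic[agent] if s == state)

print(step("s0", "send"))            # {'s1'}
print(knows_at("a", "idle", "s0"))   # True
print(knows_at("a", "busy", "s1"))   # False: a may confuse s1 with s0
```

A modal logic over such a structure can then mix temporal operators (over `transitions`) with epistemic operators (over `epistemic`), which is the shape of the verification logic the paper defines.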


Logic of Awareness for Nested Knowledge

Kubono, Yudai

arXiv.org Artificial Intelligence

Reasoning abilities of human beings are limited. Logics that treat logical inference for human knowledge should reflect these limited abilities. Logic of awareness is one of those logics. In the logic, what an agent with a limited reasoning ability actually knows at a given moment (explicit knowledge) is distinguished from the ideal knowledge that an agent obtains by performing all possible inferences with what she already knows (implicit knowledge). This paper proposes a logic for explicit knowledge. In particular, we focus on nested explicit knowledge, that is, another agent's knowledge that an agent actually knows at a given moment. We develop a new formalization of two ideas and propose a Kripke-style semantics. The first idea is the effect of an agent's state of awareness on her reasoning ability. We incorporate a relation on possible worlds called an indistinguishable relation to represent ignorance due to lack of awareness. The second idea is a state of each agent's awareness in the other agent's mind. We incorporate a non-empty finite sequence of agents called "a chain of belief for awareness". Our logic is called Awareness Logic with Partitions and Chains (ALPC). Employing an example, we show how nested explicit knowledge is formalized with our logic. Thereafter, we propose the proof system and prove its completeness. Finally, we discuss directions for extending and applying our logic and conclude. Our logic offers a foundation for a formal representation of human knowledge. We expect that the logic can be applied to computer science and game theory by describing and analyzing strategic behavior in a game and practical agent communication.
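The explicit/implicit split the abstract starts from can be sketched in the classic awareness-logic style that ALPC refines: explicit knowledge is implicit knowledge the agent is also aware of. Model, agent, and awareness set below are illustrative:

```python
worlds = {"u": {"p", "q"}, "v": {"p", "q", "r"}}                 # valuations
access = {"a": {("u", "u"), ("u", "v"), ("v", "u"), ("v", "v")}}
aware = {"a": {"p"}}                                             # a is aware of p only

def implicit(agent, atom, world):
    """Ideal knowledge: atom holds in every world the agent considers possible."""
    return all(atom in worlds[v] for (u, v) in access[agent] if u == world)

def explicit(agent, atom, world):
    """What the agent actually knows: implicit knowledge she is also aware of."""
    return atom in aware[agent] and implicit(agent, atom, world)

print(implicit("a", "q", "u"))  # True: q holds in all accessible worlds
print(explicit("a", "q", "u"))  # False: a is not aware of q
print(explicit("a", "p", "u"))  # True: implicitly known and aware
```

ALPC goes beyond this picture by making the indistinguishability induced by unawareness part of the world relation and by tracking, via chains of belief for awareness, what each agent believes about the others' awareness.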


EpiK-Eval: Evaluation for Language Models as Epistemic Models

Prato, Gabriele, Huang, Jerry, Parthasarathi, Prasanna, Sodhani, Shagun, Chandar, Sarath

arXiv.org Artificial Intelligence

In the age of artificial intelligence, the role of large language models (LLMs) is becoming increasingly central. Despite their growing prevalence, their capacity to consolidate knowledge from different training documents - a crucial ability in numerous applications - remains unexplored. This paper presents the first study examining the capability of LLMs to effectively combine such information within their parameter space. We introduce EpiK-Eval, a novel question-answering benchmark tailored to evaluate LLMs' proficiency in formulating a coherent and consistent knowledge representation from segmented narratives. Evaluations across various LLMs reveal significant weaknesses in this domain. We contend that these shortcomings stem from the intrinsic nature of prevailing training objectives. Consequently, we advocate for refining the approach towards knowledge consolidation, as it harbors the potential to dramatically improve their overall effectiveness and performance. The findings from this study offer insights for developing more robust and reliable LLMs. Our code and benchmark are available at https://github.com/chandar-lab/EpiK-Eval


Communication Pattern Logic: Epistemic and Topological Views

Castañeda, Armando, van Ditmarsch, Hans, Rosenblueth, David A., Velázquez, Diego A.

arXiv.org Artificial Intelligence

We propose communication pattern logic. A communication pattern describes how processes or agents inform each other, independently of the information content. The full-information protocol in distributed computing is the special case wherein all agents inform each other. We study this protocol in distributed computing models where communication might fail: an agent is certain about the messages it receives, but it may be uncertain about the messages other agents have received. In a dynamic epistemic logic with distributed knowledge and with modalities for communication patterns, the latter are interpreted by updating Kripke models. We propose an axiomatization of communication pattern logic, and we show that collective bisimilarity (comparing models on their distributed knowledge) is preserved when updating models with communication patterns. We can also interpret communication patterns by updating simplicial complexes, a well-known topological framework for distributed computing. We show that the different semantics correspond, and propose collective bisimulation between simplicial complexes.
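A communication pattern can be read, in a deliberately simplified way, as a set of (sender, receiver) pairs: each receiver learns the sender's local information. This toy ignores the higher-order uncertainty the paper models (who received what from whom); agents and values are illustrative:

```python
def apply_pattern(pattern, local):
    """Each agent starts knowing only its own value, then receives others'."""
    view = {agent: {agent: value} for agent, value in local.items()}
    for sender, receiver in pattern:
        view[receiver][sender] = local[sender]
    return view

local = {"a": 0, "b": 1}
views = apply_pattern({("a", "b")}, local)  # only a informs b
print(views["b"] == {"a": 0, "b": 1})       # True: b learned a's value
print(views["a"] == {"a": 0})               # True: a learned nothing new

# The full-information protocol is the pattern containing every pair:
full = {(s, r) for s in local for r in local}
print(all(v == local for v in apply_pattern(full, local).values()))  # True
```

The paper's semantics lifts this idea to updates of Kripke models (and of simplicial complexes), where an agent is certain of the messages it received but may be uncertain about the messages others received.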


De Re and De Dicto Knowledge in Egocentric Setting

Naumov, Pavel, Ovchinnikova, Anna

arXiv.org Artificial Intelligence

Traditionally, the satisfaction relation in modal logic is defined as a relation w ⊨ φ between a possible world w and a formula φ. In such a setting, formula φ expresses a property of possible worlds. For example, the statement w ⊨ "There are black holes" expresses the fact that world w has the property of containing black holes. It is also possible to consider logical systems that capture properties of agents rather than of possible worlds. In such systems, the satisfaction relation a ⊨ φ is a relation between an agent a and a formula φ.
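The world-indexed relation w ⊨ φ is just a recursive check against a Kripke model, which a short sketch makes concrete; formulas are nested tuples, and the model is illustrative:

```python
worlds = {"w": {"black_holes"}, "v": set()}      # valuations per world
access = {("w", "w"), ("w", "v"), ("v", "v")}    # accessibility relation

def sat(world, phi):
    """w |= phi, computed by recursion on the formula."""
    op = phi[0]
    if op == "atom":                 # w |= p  iff  p is true at w
        return phi[1] in worlds[world]
    if op == "not":
        return not sat(world, phi[1])
    if op == "K":                    # w |= K phi  iff  phi holds at all successors
        return all(sat(v, phi[1]) for (u, v) in access if u == world)
    raise ValueError(f"unknown operator {op!r}")

print(sat("w", ("atom", "black_holes")))         # True
print(sat("w", ("K", ("atom", "black_holes"))))  # False: fails at accessible v
```

The egocentric setting the paper studies replaces the world index by an agent index, so the same recursion is driven by a relation between agents rather than between possible worlds.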


A Meta-Reinforcement Learning Algorithm for Causal Discovery

Sauter, Andreas, Acar, Erman, François-Lavet, Vincent

arXiv.org Artificial Intelligence

Causal discovery is a task of major importance for machine learning, since causal structures can enable models to go beyond pure correlation-based inference and significantly boost their performance. However, finding causal structures from data poses a significant challenge both in computational effort and accuracy, and is in general impossible without interventions. In this paper, we develop a meta-reinforcement learning algorithm that performs causal discovery by learning to perform interventions such that it can construct an explicit causal graph. Apart from being useful for possible downstream applications, the estimated causal graph also provides an explanation for the data-generating process. We show that our algorithm estimates a good graph compared to SOTA approaches, even in environments whose underlying causal structure is previously unseen. Further, we present an ablation study showing how learning interventions contributes to the overall performance of our approach. We conclude that interventions indeed help boost the performance, efficiently yielding an accurate estimate of the causal structure of a possibly unseen environment.
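Why interventions identify causal direction, the principle the paper's agent learns to exploit, can be shown on a deterministic two-variable toy; the structural equations are illustrative and are not the paper's algorithm:

```python
# Ground-truth structural causal model X -> Y (hidden from the learner).

def scm(do_x=None):
    """Sample the system, optionally intervening on X (do(X = do_x))."""
    x = 1.0 if do_x is None else do_x
    y = 2.0 * x                      # Y is a function of its parent X
    return x, y

def scm_do_y(do_y):
    """Intervening on Y severs nothing upstream: X keeps its own mechanism."""
    x = 1.0
    return x, do_y

# Intervening on X moves Y, so X is a cause of Y ...
_, y_base = scm()
_, y_after = scm(do_x=5.0)
print(y_after != y_base)             # True

# ... but intervening on Y leaves X fixed, so Y is not a cause of X.
x_base, _ = scm()
x_after, _ = scm_do_y(5.0)
print(x_after == x_base)             # True
```

Observational data alone would show only that X and Y are correlated; the asymmetry above is visible only under interventions, which is why learning where to intervene matters for the meta-RL agent.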


Learning to Act and Observe in Partially Observable Domains

Bolander, Thomas, Gierasimczuk, Nina, Liberman, Andrés Occhipinti

arXiv.org Artificial Intelligence

We consider a learning agent in a partially observable environment, with which the agent has never interacted before, and about which it learns both what it can observe and how its actions affect the environment. The agent can learn about this domain from experience gathered by taking actions in the domain and observing their results. We present learning algorithms capable of learning as much as possible (in a well-defined sense) both about what is directly observable and about what actions do in the domain, given the learner's observational constraints. We differentiate the level of domain knowledge attained by each algorithm, and characterize the type of observations required to reach it. The algorithms use dynamic epistemic logic (DEL) to represent the learned domain information symbolically. Our work continues that of Bolander and Gierasimczuk (2015), which developed DEL-based algorithms for learning domain information in fully observable domains.


Situated Conditional Reasoning

Casini, Giovanni, Meyer, Thomas, Varzinczak, Ivan

arXiv.org Artificial Intelligence

Conditionals are useful for modelling, but are not always sufficiently expressive for capturing information accurately. In this paper we make the case for a form of conditional that is situation-based. These conditionals are more expressive than classical conditionals, are general enough to be used in several application domains, and are able to distinguish, for example, between expectations and counterfactuals. Formally, they are shown to generalise the conditional setting in the style of Kraus, Lehmann, and Magidor. We show that situation-based conditionals can be described in terms of a set of rationality postulates. We then propose an intuitive semantics for these conditionals, and present a representation result which shows that our semantic construction corresponds exactly to the description in terms of postulates. With the semantics in place, we proceed to define a form of entailment for situated conditional knowledge bases, which we refer to as minimal closure. It is reminiscent of and, indeed, inspired by, the version of entailment for propositional conditional knowledge bases known as rational closure. Finally, we proceed to show that it is possible to reduce the computation of minimal closure to a series of propositional entailment and satisfiability checks. While this is also the case for rational closure, it is somewhat surprising that the result carries over to minimal closure.