Collaborating Authors: Sakama, Chiaki


Human Conditional Reasoning in Answer Set Programming

arXiv.org Artificial Intelligence

Given a conditional sentence "P=>Q" (if P then Q) and respective facts, four different types of inferences are observed in human reasoning. Affirming the antecedent (AA) (or modus ponens) reasons Q from P; affirming the consequent (AC) reasons P from Q; denying the antecedent (DA) reasons -Q from -P; and denying the consequent (DC) (or modus tollens) reasons -P from -Q. Among them, AA and DC are logically valid, while AC and DA are logically invalid and often called logical fallacies. Nevertheless, humans often perform AC or DA as pragmatic inference in daily life. In this paper, we realize AC, DA and DC inferences in answer set programming. Eight different types of completion are introduced and their semantics are given by answer sets. We investigate formal properties and characterize human reasoning tasks in cognitive psychology. Those completions are also applied to commonsense reasoning in AI.


Partial Evaluation of Logic Programs in Vector Spaces

arXiv.org Artificial Intelligence

In this paper, we introduce methods of encoding propositional logic programs in vector spaces. Interpretations are represented by vectors and programs are represented by matrices. The least model of a definite program is computed by multiplying an interpretation vector by a program matrix. To optimize computation in vector spaces, we provide a method of partial evaluation of programs using linear algebra. Partial evaluation is done by unfolding rules in a program, and it is realized in a vector space by multiplying program matrices. We perform experiments using randomly generated programs and show that partial evaluation has the potential to realize efficient computation for large-scale programs.
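The matrix encoding described above can be illustrated with a small NumPy sketch. The encoding details here (a `true` atom carrying facts, 1/|body| entries per rule, a threshold at 1) are assumptions made for the illustration, in the spirit of the abstract rather than a reproduction of the paper's exact construction.

```python
import numpy as np

# Toy definite program:  p.   q :- p.   r :- p, q.
# Atoms index the vector components; a rule h :- b1,...,bk puts 1/k in
# row h at the columns of b1..bk; a fact is a rule whose body is {true}.
atoms = ["true", "p", "q", "r"]
M = np.zeros((4, 4))
M[0, 0] = 1.0         # true :- true   (keeps facts switched on)
M[1, 0] = 1.0         # p :- true      (the fact p.)
M[2, 1] = 1.0         # q :- p
M[3, [1, 2]] = 0.5    # r :- p, q      (1/|body| entries)

def least_model(M, v):
    """Iterate v' = theta(M v), thresholding at 1, until a fixpoint:
    an atom becomes true only when some rule for it has its whole body true."""
    while True:
        w = (M @ v >= 1.0 - 1e-9).astype(float)
        if np.array_equal(w, v):
            return v
        v = w

v = least_model(M, np.array([1.0, 0.0, 0.0, 0.0]))
print([a for a, x in zip(atoms, v) if x])  # ['true', 'p', 'q', 'r']
```

Partial evaluation then corresponds to multiplying program matrices (e.g. `M @ M` unfolds each rule one step), so that fewer thresholded iterations are needed to reach the same least model; the paper develops the conditions under which this product faithfully captures unfolding.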


A Formal Account of Deception

AAAI Conferences

This study focuses on the question: "What are the computational formalisms at the heart of deceptive and counter-deceptive machines?" We formulate deception using a dynamic epistemic logic. Three different types of deception are considered: deception by lying, deception by bluffing and deception by truth-telling, depending on whether a speaker believes what he/she says or not. Next we consider various situations where an act of deceiving happens. Intentional deception is accompanied by a speaker's intent to deceive. Indirect deception happens when false information is carried over from person to person. Self-deception is an act of deceiving the self. We investigate formal properties of different sorts of deception.


Abduction and Conversational Implicature (Extended Abstract)

AAAI Conferences

In this abstract, we first consider abduction in human dialogues. Two different types of abduction, objective abduction and subjective abduction, are introduced and formulated using propositional modal logic. We next formulate conversational implicature in the same logic and contrast it with abduction in dialogues. According to our formulation, abduction uses private belief of a reasoner, while conversational implicature relies on common knowledge between participants in conversation. The results characterize how hearers use abduction or conversational implicatures to figure out what speakers have implicated and show how two commonsense inferences are distinguished.


Confidentiality-Preserving Data Publishing for Credulous Users by Extended Abduction

arXiv.org Artificial Intelligence

Publishing private data on external servers incurs the problem of how to avoid unwanted disclosure of confidential data. We study a problem of confidentiality in extended disjunctive logic programs and show how it can be solved by extended abduction. In particular, we analyze how credulous non-monotonic reasoning affects confidentiality.


An Experiment in Formalizing Commitments Using Action Languages

AAAI Conferences

This paper investigates the use of high-level action languages for representing and reasoning about commitments in multi-agent domains. The paper introduces the language L mt with features motivated by the problem of representing commitments; in particular, it shows how L mt can handle both simple commitment actions and complex commitment protocols. The semantics of L mt provides a uniform solution to different problems in reasoning about commitments, e.g., the problems of (i) verifying whether an agent fails (or succeeds) to deliver on its commitments; (ii) identifying pending commitments; and (iii) suggesting ways to satisfy pending commitments.