
dsa-framework


An Argumentative Explanation Framework for Generalized Reason Model with Inconsistent Precedents

Fungwacharakorn, Wachara, Bourgne, Gauvain, Satoh, Ken

arXiv.org Artificial Intelligence

Precedential constraint is one foundation of case-based reasoning in AI and Law. It generally assumes that the underlying set of precedents is consistent. To relax this assumption, a generalized notion of the reason model has been introduced. While several argumentative explanation approaches exist for reasoning with precedents under the traditional, consistent reason model, no corresponding argumentative explanation method has been developed for the generalized framework that accommodates inconsistent precedents. To address this gap, this paper examines an extension of the derivation state argumentation framework (DSA-framework) that explains reasoning according to the generalized notion of the reason model.
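As background for the notion of precedential constraint the abstract builds on, the following is a minimal sketch of the standard factor-based (a fortiori) idea, not the paper's generalized reason model: a precedent decided for one side forces the same outcome in any new case that is at least as strong for that side. All factor names here are hypothetical.

```python
def forced_outcome(precedents, new_pro, new_con):
    """new_pro / new_con: sets of factors favoring plaintiff / defendant.

    A precedent (pro, con, 'plaintiff') forces 'plaintiff' in the new case
    if new_pro >= pro and new_con <= con (the new case is at least as
    favorable for the plaintiff); symmetrically for the defendant.
    """
    for pro, con, outcome in precedents:
        if outcome == "plaintiff" and pro <= new_pro and new_con <= con:
            return "plaintiff"
        if outcome == "defendant" and con <= new_con and new_pro <= pro:
            return "defendant"
    return None  # the precedent base does not constrain this case


# One precedent won by the plaintiff with pro-factor f1 despite con-factor d1.
precedents = [({"f1"}, {"d1"}, "plaintiff")]

print(forced_outcome(precedents, {"f1", "f2"}, set()))   # forced: 'plaintiff'
print(forced_outcome(precedents, {"f1"}, {"d1", "d2"}))  # unconstrained: None
```

With an inconsistent precedent base, two precedents can force opposite outcomes for the same new case, which is the situation the generalized reason model is designed to handle.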


An Argumentative Approach for Explaining Preemption in Soft-Constraint Based Norms

Fungwacharakorn, Wachara, Tsushima, Kanae, Hosobe, Hiroshi, Takeda, Hideaki, Satoh, Ken

arXiv.org Artificial Intelligence

Although various aspects of soft-constraint based norms have been explored, preemption remains challenging to understand. Preemption is a situation where higher-level norms override lower-level norms when new information emerges. To address this, we propose the derivation state argumentation framework (DSA-framework), which incorporates derivation states to explain how preemption arises as situational knowledge evolves. Based on the DSA-framework, we present an argumentative approach for explaining preemption. We formally prove that, under local optimality, the DSA-framework can explain why a consequence is obligatory or forbidden under soft-constraint based norms represented as logical constraint hierarchies.
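To make the preemption scenario concrete, here is a minimal sketch, not the paper's DSA-framework or constraint-hierarchy formalism: norms are ordered by level, and a higher-level norm's conclusion preempts a lower-level one once its triggering facts become known. The norms and fact names are hypothetical.

```python
def consequence(norms, facts):
    """Return the conclusion of the highest-priority applicable norm.

    norms: list of (level, required_facts, conclusion); level 0 is highest.
    A norm applies when all of its required facts hold in the situation.
    """
    for level, required, conclusion in sorted(norms, key=lambda n: n[0]):
        if required <= facts:
            return conclusion
    return None


# Hypothetical two-level hierarchy: by default an action is forbidden,
# but a higher-level emergency norm overrides (preempts) the default.
norms = [
    (1, frozenset(), "forbidden(walk_on_grass)"),
    (0, frozenset({"emergency"}), "permitted(walk_on_grass)"),
]

print(consequence(norms, frozenset()))               # forbidden(walk_on_grass)
print(consequence(norms, frozenset({"emergency"})))  # permitted(walk_on_grass)
```

The second call illustrates preemption as the abstract describes it: once the new fact `emergency` is added to the situational knowledge, the higher-level norm overrides the lower-level prohibition.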