Partial Disentanglement via Mechanism Sparsity

Lachapelle, Sébastien, Lacoste-Julien, Simon

arXiv.org Artificial Intelligence

Disentanglement via mechanism sparsity was introduced recently as a principled approach to extract latent factors without supervision when the causal graph relating them in time is sparse, and/or when actions are observed and affect them sparsely. However, this theory applies only to ground-truth graphs satisfying a specific criterion. In this work, we introduce a generalization of this theory which applies to any ground-truth graph and specifies qualitatively how disentangled the learned representation is expected to be, via a new equivalence relation over models we call consistency. This equivalence captures which factors are expected to remain entangled and which are not, based on the specific form of the ground-truth graph. We call this weaker form of identifiability partial disentanglement. The graphical criterion that allows complete disentanglement, proposed in an earlier work, can be derived as a special case of our theory. Finally, we enforce graph sparsity with constrained optimization and illustrate our theory and algorithm in simulations.


Doxastic Extensions of Łukasiewicz Logic

Dastgheib, Doratossadat, Farahani, Hadi

arXiv.org Artificial Intelligence

We propose two new doxastic extensions of fuzzy Łukasiewicz logic whose semantics are Kripke-based, with both fuzzy atomic propositions and fuzzy accessibility relations. One class of these extensions is equipped with an uninformed belief operator, and the other class is based on a new notion of skeptical belief. We model a fuzzy version of the muddy children problem and a CPA-security experiment using uninformed belief and skeptical belief, respectively. Moreover, we prove soundness and completeness for both of these belief extensions.