Persia, Cosimo
Verifying Properties of Tsetlin Machines
Przybysz, Emilia, Bhattarai, Bimal, Persia, Cosimo, Ozaki, Ana, Granmo, Ole-Christoffer, Sharma, Jivitesh
Tsetlin Machines (TsMs) are a promising and interpretable machine learning method that can be applied to various classification tasks. We present an exact encoding of TsMs into propositional logic and formally verify properties of TsMs using a SAT solver. In particular, we introduce in this work a notion of similarity of machine learning models and apply it to check the similarity of TsMs. We also consider notions of robustness and equivalence from the literature and adapt them to TsMs. We then show the correctness of our encoding and provide verification results for adversarial robustness, equivalence, and similarity of TsMs. In our experiments, we employ the MNIST and IMDB datasets for image and sentiment classification, respectively. We compare the robustness-verification results obtained for TsMs on MNIST with those reported in the literature for Binarized Neural Networks.
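To make the encoding idea concrete, here is a minimal sketch, not the paper's construction: a single TsM clause (a conjunction of input literals) is Tseitin-encoded into CNF, and a SAT solver is asked whether any input within Hamming distance one of a given sample flips the clause's output. It assumes the PySAT package (python-sat); the function name and the single-clause simplification are our own illustration, not the authors' implementation.

    # Minimal sketch (not the paper's exact encoding): check whether a single
    # Tsetlin Machine clause is robust to one-bit flips of a given input,
    # using a SAT solver.  Requires the `python-sat` package (PySAT).
    from itertools import combinations
    from pysat.solvers import Glucose3

    def clause_flip_possible(pos, neg, a):
        """pos/neg: 1-based indices of positive/negated input literals in the
        TM clause; a: the original Boolean input vector.  Returns True iff some
        input at Hamming distance <= 1 from `a` changes the clause output."""
        n = len(a)
        c = n + 1  # fresh variable for the clause output (Tseitin encoding)
        cnf = []
        lits = [j for j in pos] + [-j for j in neg]  # clause literals over x
        # c -> each literal holds; (all literals hold) -> c
        for l in lits:
            cnf.append([-c, l])
        cnf.append([c] + [-l for l in lits])
        # Hamming distance <= 1: at most one "difference" literal is true.
        # The difference literal for bit i is x_i if a[i] == 0, else -x_i.
        diff = [(i + 1) if a[i] == 0 else -(i + 1) for i in range(n)]
        for l1, l2 in combinations(diff, 2):
            cnf.append([-l1, -l2])
        # Clause output on the unperturbed input.
        out = all(a[j - 1] == 1 for j in pos) and all(a[j - 1] == 0 for j in neg)
        with Glucose3(bootstrap_with=cnf) as solver:
            # Ask for a nearby input on which the clause output is flipped.
            return solver.solve(assumptions=[-c if out else c])

    # Example: the clause (x1 AND NOT x3) is active on a = (1, 0, 0);
    # flipping bit 1 or bit 3 deactivates it, so a flip is possible.
    print(clause_flip_possible(pos=[1], neg=[3], a=[1, 0, 0]))  # True

The full verification problem additionally encodes clause voting and the class-comparison step across all clauses; the sketch above isolates only the propositional-encoding pattern and the SAT query against a distance constraint.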
Mining ℰℒ⊥ Bases with Adaptable Role Depth
Guimarães, Ricardo (Department of Informatics, University of Bergen) | Ozaki, Ana (Department of Informatics, University of Bergen) | Persia, Cosimo (Department of Informatics, University of Bergen) | Sertkaya, Baris (Prof. Dr.)
In Formal Concept Analysis, a base for a finite structure is a set of implications that characterizes all valid implications of the structure. This notion can be adapted to the context of Description Logic, where the base consists of a set of concept inclusions instead of implications. In this setting, concept expressions can be arbitrarily large. Thus, it is not clear whether a finite base exists and, if so, how large concept expressions may need to be. We first revisit results in the literature for mining ℰℒ⊥ bases from finite interpretations. These focus mainly on finding a finite base or on fixing the role depth, potentially losing valid concept inclusions of higher role depth. We then present a new strategy for mining ℰℒ⊥ bases that is adaptable in the sense that it can bound the role depth of concepts depending on the local structure of the interpretation. Our strategy is guaranteed to capture all ℰℒ⊥ concept inclusions holding in the interpretation, not only those up to a fixed role depth. We also consider confident ℰℒ⊥ bases, which only require that a proportion of the domain of the interpretation satisfies the base, instead of the whole domain. This case is useful for coping with noisy data.
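As a rough illustration of bounding depth by local structure, here is a minimal sketch, not the algorithm from the paper: it refines classes of domain elements of a finite interpretation, bisimulation-style, and reports the round at which the classes stabilise, after which deeper concept expressions distinguish no new elements. ℰℒ⊥ concept inclusions are characterised via simulations rather than bisimulations, so this global bound is coarser than the element-local bounds the paper computes; the interpretation encoding and all names below are our assumptions.

    # Minimal sketch: estimate how deep EL-style concepts need to be on a
    # finite interpretation by refining element classes until they stabilise
    # (bisimulation-style refinement; coarser than the paper's approach).
    def stabilisation_depth(domain, concept_names, role_names, C, R):
        """C[A] = set of elements satisfying concept name A;
        R[r] = set of (d, e) pairs in role r's extension.
        Returns the number of refinement rounds until classes stop splitting."""
        # Depth-0 classes: elements agree on the concept names they satisfy.
        colour = {d: frozenset(A for A in concept_names if d in C[A])
                  for d in domain}
        depth = 0
        while True:
            # Refine: also record, per role, the set of successor classes.
            new = {
                d: (colour[d],
                    frozenset((r, colour[e]) for r in role_names
                              for (x, e) in R[r] if x == d))
                for d in domain
            }
            if len(set(new.values())) == len(set(colour.values())):
                return depth  # no class split: deeper concepts add nothing
            colour, depth = new, depth + 1

    # Tiny interpretation: two Persons, one r-edge from a to b.
    domain = {"a", "b"}
    C = {"Person": {"a", "b"}}
    R = {"r": {("a", "b")}}
    print(stabilisation_depth(domain, ["Person"], ["r"], C, R))  # 1

Because refinement only ever splits classes, comparing class counts between rounds suffices to detect stabilisation; in the example, one round separates the element with an r-successor from the one without, and a second round changes nothing.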