Interpreting Finite Automata for Sequential Data
Hammerschmidt, Christian Albert, Verwer, Sicco, Lin, Qin, State, Radu
Automaton models are often regarded as interpretable. Interpretability itself, however, is not well defined: without first explicitly specifying objectives or desired attributes, it remains unclear what interpretability means. In this paper, we identify the key properties used to interpret automata and propose a modification of a state-merging approach to learn variants of finite state automata. We apply the approach to problems beyond typical grammar inference tasks. Additionally, we cover several use-cases for prediction, classification, and clustering on sequential data, in both supervised and unsupervised scenarios, to show how the identified key properties apply across a wide range of contexts.
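To make the state-merging idea concrete, the following is a minimal, generic sketch (not the authors' exact algorithm): build a prefix tree acceptor (PTA) from a set of sequences, then merge pairs of states whose outgoing behavior is consistent. All function names here are illustrative.

```python
# Generic state-merging sketch for automaton learning.
# States are integers; transitions are a dict: state -> {symbol: next_state}.
from collections import defaultdict

def build_pta(sequences):
    """Build a prefix tree acceptor from a list of symbol sequences."""
    trans = defaultdict(dict)
    next_state = 1  # state 0 is the root
    for seq in sequences:
        state = 0
        for sym in seq:
            if sym not in trans[state]:
                trans[state][sym] = next_state
                next_state += 1
            state = trans[state][sym]
    return dict(trans)

def compatible(trans, a, b):
    """Two states are compatible if every shared outgoing symbol
    leads to compatible successor states (recursively)."""
    ta, tb = trans.get(a, {}), trans.get(b, {})
    return all(compatible(trans, ta[s], tb[s]) for s in ta if s in tb)

def merge(trans, keep, drop):
    """Merge state `drop` into `keep`: redirect incoming edges,
    then fold outgoing edges, merging successors on conflicts."""
    if keep == drop:
        return trans
    for state in trans:
        for sym, nxt in trans[state].items():
            if nxt == drop:
                trans[state][sym] = keep
    for sym, nxt in trans.pop(drop, {}).items():
        if sym in trans.get(keep, {}):
            merge(trans, trans[keep][sym], nxt)
        else:
            trans.setdefault(keep, {})[sym] = nxt
    return trans

# Example: the PTA for {"ab", "cb"} has two branches; merging the
# two states reached after 'a' and 'c' collapses their shared suffix.
pta = build_pta(["ab", "cb"])
if compatible(pta, 1, 3):
    merged = merge(pta, 1, 3)
```

In practical state-merging learners (e.g. evidence-driven variants), the compatibility test is replaced by a statistical or evidence-based score over the data reaching each state, which is where modifications such as the one described above typically intervene.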