The FIX Benchmark: Extracting Features Interpretable to eXperts
Jin, Helen, Havaldar, Shreya, Kim, Chaehyeon, Xue, Anton, You, Weiqiu, Qu, Helen, Gatti, Marco, Hashimoto, Daniel A., Jain, Bhuvnesh, Madani, Amin, Sako, Masao, Ungar, Lyle, Wong, Eric
Feature-based methods are commonly used to explain model predictions, but these methods often implicitly assume that interpretable features are readily available. However, this is often not the case for high-dimensional data, where it can be hard even for domain experts to mathematically specify which features are important. Can we instead automatically extract collections or groups of features that are aligned with expert knowledge? To address this gap, we present FIX (Features Interpretable to eXperts), a benchmark for measuring how well a collection of features aligns with expert knowledge. In collaboration with domain experts, we propose FIXScore, a unified expert alignment measure applicable to diverse real-world settings spanning the cosmology, psychology, and medicine domains and the vision, language, and time series data modalities. With FIXScore, we find that popular feature-based explanation methods align poorly with expert-specified knowledge, highlighting the need for new methods that can better identify features interpretable to experts.
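To make the idea of expert alignment concrete, here is a minimal illustrative sketch, not the paper's exact FIXScore: it scores each proposed feature group by its best Jaccard overlap (IoU) with any expert-specified group, then averages over groups. The function names and the choice of IoU are assumptions for illustration only.

```python
import numpy as np

def group_alignment(group: np.ndarray, expert_groups: list) -> float:
    """Best Jaccard overlap (IoU) between one proposed feature group and
    any expert-specified group. Groups are boolean masks over features.
    Illustrative stand-in for an expert-alignment measure, not FIXScore itself."""
    best = 0.0
    for expert in expert_groups:
        inter = np.logical_and(group, expert).sum()
        union = np.logical_or(group, expert).sum()
        if union > 0:
            best = max(best, inter / union)
    return best

def alignment_score(groups: list, expert_groups: list) -> float:
    """Average best-overlap across all proposed groups: 1.0 means every
    proposed group exactly matches some expert group, 0.0 means no overlap."""
    return float(np.mean([group_alignment(g, expert_groups) for g in groups]))
```

For example, a segmentation that recovers an expert's region exactly scores 1.0 on that group, while a group disjoint from every expert region scores 0.0.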
Sum-of-Parts Models: Faithful Attributions for Groups of Features
You, Weiqiu, Qu, Helen, Gatti, Marco, Jain, Bhuvnesh, Wong, Eric
An explanation of a machine learning model is considered "faithful" if it accurately reflects the model's decision-making process. However, explanations such as feature attributions for deep learning are not guaranteed to be faithful, and can produce potentially misleading interpretations. In this work, we develop Sum-of-Parts (SOP), a class of models whose predictions come with grouped feature attributions that are faithful-by-construction. This model decomposes a prediction into an interpretable sum of scores, each of which is directly attributable to a sparse group of features. We evaluate SOP on benchmarks with standard interpretability metrics, and in a case study, we use the faithful explanations from SOP to help astrophysicists discover new knowledge about galaxy formation.
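The faithful-by-construction property can be sketched in a few lines: because the prediction is literally computed as a sum of per-group scores, each score is the group's attribution by definition. This is a toy sketch under stated assumptions, not the SOP architecture; the masks are given rather than learned, and `score_fn` stands in for the model's learned group scorer.

```python
import numpy as np

def sum_of_parts_predict(x: np.ndarray, masks: list, score_fn):
    """Toy sum-of-parts prediction: each sparse mask selects a feature
    group, score_fn scores the masked input, and the prediction is the
    sum of per-group scores. Each score is thus a faithful attribution
    for its group, since removing the group removes exactly that score."""
    scores = np.array([score_fn(x * m) for m in masks])
    return scores.sum(), scores

# Toy linear scorer standing in for the learned group-scoring module.
w = np.array([1.0, -2.0, 3.0, 0.5])
score_fn = lambda z: float(w @ z)

x = np.array([2.0, 1.0, 0.0, 4.0])
masks = [np.array([1.0, 1.0, 0.0, 0.0]),  # group 1: features 0-1
         np.array([0.0, 0.0, 1.0, 1.0])]  # group 2: features 2-3
prediction, group_scores = sum_of_parts_predict(x, masks, score_fn)
```

Because the masks here partition the features and the scorer is linear, the summed group scores exactly reproduce the full-input prediction, which is the kind of decomposition-level faithfulness the abstract describes.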