InterVLS: Interactive Model Understanding and Improvement with Vision-Language Surrogates
Jinbin Huang, Wenbin He, Liang Gou, Liu Ren, Chris Bryan
–arXiv.org Artificial Intelligence
Deep learning models are widely used in critical applications, highlighting the need for pre-deployment model understanding and improvement. Visual concept-based methods, while increasingly used for this purpose, face challenges: (1) most concepts lack interpretability, (2) existing methods require model knowledge that is often unavailable at run time, and (3) there is no no-code method for improving a model after it has been analyzed. Addressing these challenges, we present InterVLS. The system facilitates model understanding by discovering text-aligned concepts and measuring their influence with model-agnostic linear surrogates. Employing visual analytics, InterVLS offers concept-based explanations and performance insights. It enables users to adjust concept influences to update a model, facilitating no-code model improvement. We evaluate InterVLS in a user study and illustrate its functionality with two scenarios. Results indicate that InterVLS effectively helps users identify concepts that are influential to a model, gain insights, and adjust concept influence to improve the model. We conclude with a discussion based on our study results.
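To make the surrogate idea concrete, the following is a minimal, hypothetical sketch (not the authors' implementation of InterVLS): it assumes precomputed per-image concept scores (e.g., CLIP image-text similarities), fits a linear surrogate to mimic the black-box model's predicted labels, reads the surrogate weights as concept influences, and lets a user re-weight those influences without writing model code. The function names, the scikit-learn usage, and the multi-class setting are illustrative assumptions.

```python
# Minimal sketch of a concept-based linear surrogate (illustrative only).
# Assumes multi-class predictions and precomputed concept scores, e.g. CLIP
# image-text similarities, with shape (n_images, n_concepts).
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_concept_surrogate(concept_scores, blackbox_labels):
    """Fit a linear surrogate that predicts the black-box model's labels
    from concept scores; no access to the model's internals is needed."""
    surrogate = LogisticRegression(max_iter=1000)
    surrogate.fit(concept_scores, blackbox_labels)
    return surrogate

def rank_concept_influence(surrogate, concept_names):
    """Rank concepts by the magnitude of their class-averaged weights."""
    weights = surrogate.coef_.mean(axis=0)              # (n_concepts,)
    order = np.argsort(-np.abs(weights))
    return [(concept_names[i], float(weights[i])) for i in order]

def adjust_and_predict(surrogate, concept_scores, concept_multipliers):
    """Apply user-chosen multiplicative adjustments to concept weights and
    re-predict, emulating a no-code update of the surrogate."""
    adjusted = surrogate.coef_ * np.asarray(concept_multipliers)  # (n_classes, n_concepts)
    logits = concept_scores @ adjusted.T + surrogate.intercept_   # (n_images, n_classes)
    return logits.argmax(axis=1)
```

In this sketch, setting a concept's multiplier below 1 down-weights a spuriously influential concept, mirroring the kind of influence adjustment the abstract describes.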
Nov-6-2023
- Country:
  - Europe > Austria > Vienna (0.14)
  - North America > United States (0.28)
- Genre:
  - Research Report (1.00)
- Industry:
  - Information Technology (0.46)
  - Transportation (0.46)
- Technology:
  - Information Technology
    - Artificial Intelligence
      - Machine Learning
        - Neural Networks > Deep Learning (0.69)
        - Performance Analysis > Accuracy (0.71)
      - Natural Language (1.00)
      - Representation & Reasoning (1.00)
      - Vision (1.00)
    - Human Computer Interaction (1.00)