Locally Invariant Explanations: Towards Stable and Unidirectional Explanations through Local Invariant Learning
Neural Information Processing Systems
The locally interpretable model-agnostic explanations (LIME) method is one of the most popular methods for explaining black-box models at a per-example level.
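The core idea behind LIME-style per-example explanation can be sketched as follows: perturb the input in a small neighbourhood, query the black box on the perturbations, weight each sample by its proximity to the original point, and fit a weighted linear surrogate whose coefficients serve as feature attributions. The sketch below is a minimal illustration of that idea with NumPy, not the official LIME implementation; the function name, kernel width, and sampling scale are assumptions chosen for clarity.

```python
import numpy as np

def lime_style_explanation(predict_fn, x, n_samples=500, kernel_width=0.75, rng=None):
    """Minimal LIME-style local surrogate (illustrative sketch, not the
    official LIME code): perturb x, weight samples by proximity, fit a
    weighted linear model, and return its coefficients as attributions."""
    rng = rng or np.random.default_rng(0)
    d = x.shape[0]
    # Sample perturbations in a small neighbourhood of x
    Z = x + rng.normal(scale=0.5, size=(n_samples, d))
    y = predict_fn(Z)
    # Exponential kernel on Euclidean distance supplies locality weights
    dist = np.linalg.norm(Z - x, axis=1)
    w = np.exp(-(dist ** 2) / (kernel_width ** 2))
    # Weighted least squares with an intercept column:
    # solve (A^T W A) beta = A^T W y
    A = np.hstack([np.ones((n_samples, 1)), Z])
    WA = A * w[:, None]
    beta, *_ = np.linalg.lstsq(WA.T @ A, WA.T @ y, rcond=None)
    return beta[1:]  # drop the intercept; keep per-feature attributions

# Toy black box that depends only on feature 0
f = lambda Z: 3.0 * Z[:, 0] + 0.1
x0 = np.array([1.0, 2.0, 3.0])
attr = lime_style_explanation(f, x0)  # attr[0] ≈ 3.0, others ≈ 0
```

Because the toy black box is exactly linear, the surrogate recovers its true coefficients; on a genuinely nonlinear model the attributions are only locally faithful, which is precisely the instability this paper addresses.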