ICX360: In-Context eXplainability 360 Toolkit
Dennis Wei, Ronny Luss, Xiaomeng Hu, Lucas Monteiro Paes, Pin-Yu Chen, Karthikeyan Natesan Ramamurthy, Erik Miehling, Inge Vejsbjerg, Hendrik Strobelt
arXiv.org Artificial Intelligence
Large Language Models (LLMs) have become ubiquitous in everyday life and are entering higher-stakes applications, ranging from summarizing meeting transcripts to answering doctors' questions. As with earlier predictive models, it is crucial that we develop tools for explaining the output of LLMs, be it a summary, a list, a response to a question, etc. With these needs in mind, we introduce In-Context Explainability 360 (ICX360), an open-source Python toolkit for explaining LLMs with a focus on the user-provided context (or prompts in general) fed to the LLMs. ICX360 contains implementations of three recent tools that explain LLMs using both black-box and white-box methods (via perturbations and gradients, respectively).
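To illustrate the black-box (perturbation-based) idea the abstract mentions, here is a minimal sketch of leave-one-out context attribution: remove each context sentence in turn and measure how much a black-box output score drops. The `toy_score` function below is a stand-in for a real LLM call (e.g., the log-probability of the generated answer given the context); none of these names come from the ICX360 API itself.

```python
# Hedged sketch of perturbation-based context attribution.
# `score` stands in for a black-box LLM scoring call; it is NOT
# the ICX360 API, just an illustration of the general technique.

def attribute_by_ablation(sentences, score):
    """Score each context sentence by the drop in the output score
    when that sentence is removed (leave-one-out perturbation)."""
    full = score(sentences)
    attributions = []
    for i in range(len(sentences)):
        ablated = sentences[:i] + sentences[i + 1:]
        attributions.append(full - score(ablated))
    return attributions

# Toy black-box scorer: rewards contexts that mention "Vienna".
def toy_score(sentences):
    return sum(1.0 for s in sentences if "Vienna" in s)

context = [
    "The meeting was held in Vienna.",
    "Lunch was served at noon.",
    "Attendees flew in from Toronto.",
]
print(attribute_by_ablation(context, toy_score))  # → [1.0, 0.0, 0.0]
```

In practice the scorer would query the LLM once per ablated context, so the cost grows linearly with the number of context units; gradient-based (white-box) methods avoid these repeated calls when model internals are available.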
Nov-17-2025