Interpretable Cross-Examination Technique (ICE-T): Using highly informative features to boost LLM performance
Goran Muric, Ben Delay, Steven Minton
arXiv.org Artificial Intelligence
In this paper, we introduce the Interpretable Cross-Examination Technique (ICE-T), a novel approach that leverages structured multi-prompt techniques with Large Language Models (LLMs) to improve classification performance over zero-shot and few-shot methods. In domains where interpretability is crucial, such as medicine and law, standard models often fall short due to their "black-box" nature. ICE-T addresses these limitations by using a series of generated prompts that allow an LLM to approach the problem from multiple directions. The responses from the LLM are then converted into numerical feature vectors and processed by a traditional classifier. This method not only maintains high interpretability but also allows smaller, less capable models to achieve or exceed the performance of larger, more advanced models under zero-shot conditions. We demonstrate the effectiveness of ICE-T across a diverse set of data sources, including medical records and legal documents, consistently surpassing the zero-shot baseline on classification metrics such as the F1 score. Our results indicate that ICE-T can improve both the performance and transparency of AI applications in complex decision-making environments.
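The pipeline the abstract describes (several targeted prompts, answers mapped to a numeric feature vector, a conventional classifier on top) can be illustrated with a minimal sketch. The prompts, the llm_answer stand-in, and the choice of logistic regression below are illustrative assumptions, not the authors' implementation.

```python
from typing import List

import numpy as np
from sklearn.linear_model import LogisticRegression


def llm_answer(prompt: str, document: str) -> str:
    """Stand-in for an LLM call: a real system would send the prompt and the
    document to a model and return its short answer ("yes"/"no")."""
    # Naive keyword heuristic so the sketch runs without an API key.
    return "yes" if "chest pain" in document.lower() else "no"


# Hypothetical cross-examination prompts for a clinical yes/no task.
PROMPTS: List[str] = [
    "Does the record mention chest pain? Answer yes or no.",
    "Is the patient described as short of breath? Answer yes or no.",
    "Is the patient on anticoagulants? Answer yes or no.",
]


def featurize(document: str) -> np.ndarray:
    """Ask every prompt about the document and encode the answers as 0/1 features."""
    answers = [llm_answer(p, document) for p in PROMPTS]
    return np.array([1.0 if a.strip().lower().startswith("yes") else 0.0
                     for a in answers])


def train(documents: List[str], labels: List[int]) -> LogisticRegression:
    """Fit an interpretable downstream classifier on the LLM-derived features."""
    X = np.vstack([featurize(d) for d in documents])
    clf = LogisticRegression()
    clf.fit(X, labels)
    return clf
```

Because each feature corresponds to one human-readable question, the weights of the downstream classifier remain inspectable, which is the interpretability property the abstract emphasizes.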
May 8, 2024