ALOHa: A New Measure for Hallucination in Captioning Models
Suzanne Petryk, David M. Chan, Anish Kachinthaya, Haodi Zou, John Canny, Joseph E. Gonzalez, Trevor Darrell
arXiv.org Artificial Intelligence
Despite recent advances in multimodal pre-training for visual description, state-of-the-art models still produce captions containing errors, such as hallucinating objects not present in a scene. The existing prominent metric for object hallucination, CHAIR, is limited to a fixed set of MS COCO objects and synonyms. In this work, we propose a modernized open-vocabulary metric, ALOHa, which leverages large language models (LLMs) to measure object hallucinations. Specifically, we use an LLM to extract groundable objects from a candidate caption, measure their semantic similarity to reference objects from captions and object detections, and use Hungarian matching to produce a final hallucination score. We show that ALOHa correctly identifies 13.6% more hallucinated objects than CHAIR on HAT, a new gold-standard subset of MS COCO Captions annotated for hallucinations, and 30.8% more on nocaps, where objects extend beyond MS COCO categories. Our code is available at https://davidmchan.github.io/aloha/.
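The matching step described above — pairing candidate-caption objects with reference objects by semantic similarity and deriving a hallucination score — can be sketched as follows. This is an illustrative assumption of how such a pipeline might look, not the paper's exact implementation: the similarity values, object names, and the `1 - similarity` scoring rule are hypothetical stand-ins.

```python
# Hypothetical sketch of an ALOHa-style matching step: given a
# similarity matrix between objects extracted from a candidate
# caption and objects from references, use Hungarian matching
# (scipy's linear_sum_assignment) to pair each candidate object
# with its best reference, then score hallucination as
# 1 - (matched similarity). Illustrative only.
import numpy as np
from scipy.optimize import linear_sum_assignment


def aloha_style_scores(similarity):
    """similarity[i, j]: semantic similarity in [0, 1] between
    candidate object i and reference object j. Returns a dict
    mapping each candidate index to a hallucination score,
    where higher means more likely hallucinated."""
    # Hungarian matching maximizes total similarity
    # (equivalently, minimizes the negated cost matrix).
    rows, cols = linear_sum_assignment(-similarity)
    return {i: 1.0 - similarity[i, j] for i, j in zip(rows, cols)}


# Toy example: candidate objects ["dog", "frisbee"] vs.
# reference objects ["puppy", "park"] (made-up similarities).
sim = np.array([[0.9, 0.1],
                [0.2, 0.3]])
scores = aloha_style_scores(sim)
# "dog" matches "puppy" closely (low score); "frisbee" matches
# nothing well (high score), so it looks hallucinated.
```

A maximum final score over objects (or a similar aggregation) could then flag the caption as a whole, though the actual aggregation used by ALOHa is described in the paper itself.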
Apr-3-2024