Hallucination Detection for Grounded Instruction Generation
Zhao, Lingjun; Nguyen, Khanh; Daumé III, Hal
arXiv.org Artificial Intelligence
We investigate the problem of generating instructions that guide humans navigating in simulated residential environments. A major issue with current models is hallucination: they generate references to actions or objects that are inconsistent with what a human follower would perform or encounter along the described path. We develop a detector for these hallucinated references by adapting a model pre-trained on a large corpus of image-text pairs and fine-tuning it with a contrastive loss that separates correct instructions from instructions containing synthesized hallucinations. Our final model outperforms several baselines, including one that uses the word probabilities estimated by the instruction-generation model, as well as supervised models based on LSTM and Transformer architectures.
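As a rough illustration of the approach the abstract describes, the following is a minimal PyTorch sketch of contrastive fine-tuning with synthesized negatives. All component names (`HallucinationScorer`, `synthesize_hallucination`), the word-swap list, the feature dimension, and the margin value are hypothetical assumptions for illustration; the paper's actual pretrained encoder, negative-synthesis procedure, and loss details may differ.

```python
import random
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical swap list: replacing a grounded word with one the follower
# would not encounter along the path yields a synthesized hallucination.
SWAPS = {"chair": "sofa", "left": "right", "stairs": "hallway"}

def synthesize_hallucination(instruction: str) -> str:
    """Create a negative example by swapping one grounded word."""
    words = instruction.split()
    candidates = [i for i, w in enumerate(words) if w in SWAPS]
    if candidates:
        i = random.choice(candidates)
        words[i] = SWAPS[words[i]]
    return " ".join(words)

class HallucinationScorer(nn.Module):
    """Wraps a pretrained image-text encoder (e.g., a CLIP-style model)
    with a scoring head; higher scores mean the instruction is more
    consistent with the visual path."""
    def __init__(self, encoder: nn.Module, feat_dim: int = 512):
        super().__init__()
        self.encoder = encoder            # pretrained image-text backbone
        self.head = nn.Linear(feat_dim, 1)

    def forward(self, path_images, instruction_tokens):
        features = self.encoder(path_images, instruction_tokens)
        return self.head(features).squeeze(-1)

def contrastive_loss(pos_scores, neg_scores, margin: float = 0.2):
    # Hinge-style objective: a correct instruction should outscore its
    # hallucinated variant by at least `margin`.
    return F.relu(margin - pos_scores + neg_scores).mean()
```

One plausible training loop under these assumptions pairs each human-written instruction with a synthesized negative for the same path, then minimizes `contrastive_loss` over batches; at inference, instructions (or spans) scoring below a threshold would be flagged as containing hallucinated references.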
Oct-23-2023
- Country:
- North America > United States
- California (0.14)
- Maryland (0.14)
- Genre:
- Research Report (0.82)