A Unified Hallucination Mitigation Framework for Large Vision-Language Models
Yue Chang, Liqiang Jing, Xiaopeng Zhang, Yue Zhang
Hallucination is a common, hard-to-eradicate problem for Large Vision-Language Models (LVLMs), especially in long generations: hallucinated output is partially inconsistent with the image content. To mitigate hallucination, current studies focus either on the model's inference process or on its generated results, but their solutions do not always handle the various types of queries, or the hallucinations those queries induce, appropriately. To deal accurately with diverse hallucinations, we present Dentist, a unified framework for hallucination mitigation. Its core step is to first classify the query and then apply a different mitigation process based on the classification result, just as a dentist first examines the teeth and then makes a treatment plan. In a simple deployment, Dentist classifies queries as perception or reasoning and readily mitigates potential hallucinations in the answers, as demonstrated in our experiments.
arXiv.org Artificial Intelligence
Sep-24-2024
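Below is a minimal sketch of the classify-then-mitigate routing the abstract describes, assuming a simple two-way perception/reasoning split. Every name here (`classify_query`, `verify_perception`, `verify_reasoning`) is a hypothetical placeholder for illustration, not the paper's actual implementation.

```python
# Sketch of a classify-then-mitigate pipeline in the spirit of Dentist.
# The classifier and both mitigators are illustrative stubs only.
from enum import Enum
from typing import Callable

class QueryType(Enum):
    PERCEPTION = "perception"  # asks what is visible, e.g. "What color is the bus?"
    REASONING = "reasoning"    # asks for inference, e.g. "Why is the street wet?"

def classify_query(query: str) -> QueryType:
    """Toy keyword-based classifier; a real system would use a learned model."""
    reasoning_cues = ("why", "how", "explain", "because")
    q = query.lower()
    return QueryType.REASONING if any(c in q for c in reasoning_cues) else QueryType.PERCEPTION

def verify_perception(query: str, draft: str) -> str:
    """Placeholder: would cross-check visual claims in the draft against the image,
    e.g. by re-querying the LVLM with sub-questions and reconciling answers."""
    return draft

def verify_reasoning(query: str, draft: str) -> str:
    """Placeholder: would decompose the reasoning chain and verify each premise
    against the image before accepting the conclusion."""
    return draft

MITIGATORS: dict[QueryType, Callable[[str, str], str]] = {
    QueryType.PERCEPTION: verify_perception,
    QueryType.REASONING: verify_reasoning,
}

def answer_with_mitigation(query: str, draft_answer: str) -> str:
    """Classify first ("observe the teeth"), then apply the matching treatment."""
    return MITIGATORS[classify_query(query)](query, draft_answer)

if __name__ == "__main__":
    print(answer_with_mitigation("What color is the bus?", "The bus is red."))
```

The design point is only the dispatch: hallucination checks suited to perceptual claims differ from those suited to reasoning chains, so routing by query type lets each mitigator stay specialized.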