Visual Instruction Tuning with Polite Flamingo
Delong Chen, Jianfeng Liu, Wenliang Dai, Baoyuan Wang
arXiv.org Artificial Intelligence
Recent research has demonstrated that multi-task fine-tuning of multi-modal Large Language Models (LLMs) on an assortment of annotated downstream vision-language datasets significantly enhances their performance. Yet this process incurs a side effect, which we term the "multi-modal alignment tax": because raw annotations are overly succinct and unformatted, fine-tuning degrades the model's ability to format responses appropriately (its "politeness"), reducing human preference for its outputs. In this paper, we introduce Polite Flamingo, a multi-modal response rewriter that transforms raw annotations into a more appealing, "polite" format. Polite Flamingo is trained to reconstruct high-quality responses from their automatically distorted counterparts and is subsequently applied to a vast array of vision-language datasets for response rewriting. After rigorous filtering, we obtain the PF-1M dataset and validate its value by fine-tuning a multi-modal LLM on it. Combined with novel methodologies including U-shaped multi-stage tuning and multi-turn augmentation, the resulting model, Clever Flamingo, demonstrates advantages in both multi-modal understanding and response politeness according to automated and human evaluations.
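The rewriter's training signal is self-supervised: take a high-quality "polite" response, distort it automatically so it resembles a terse raw annotation, and train the model to recover the original. The sketch below illustrates how such (distorted input, polite target) pairs might be constructed; the two distortion functions are hypothetical stand-ins for illustration only, not the paper's actual distortion pipeline.

```python
# Minimal sketch (assumed, not the authors' exact code): build training pairs
# for a response rewriter by distorting polite responses, then training the
# model to reconstruct the original from the distorted version.
import random
import re

def strip_formatting(text: str) -> str:
    """Hypothetical distortion: drop emphasis marks and list bullets."""
    text = re.sub(r"[*_`#]+", "", text)
    return re.sub(r"^\s*[-•]\s*", "", text, flags=re.MULTILINE)

def truncate_to_core(text: str) -> str:
    """Hypothetical distortion: keep only the first sentence,
    mimicking the succinct style of raw dataset annotations."""
    return text.split(". ")[0].rstrip(".") + "."

DISTORTIONS = [strip_formatting, truncate_to_core]

def make_rewriter_pair(polite_response: str) -> tuple[str, str]:
    """Return (distorted_input, polite_target) for seq2seq rewriter training."""
    distorted = polite_response
    for fn in random.sample(DISTORTIONS, k=random.randint(1, len(DISTORTIONS))):
        distorted = fn(distorted)
    return distorted, polite_response

# Usage: once trained on such pairs, the rewriter is run in the opposite
# direction on real raw annotations to produce polite responses.
src, tgt = make_rewriter_pair("The bird is a **flamingo**. It wades in shallow water.")
print(src)  # terse, unformatted input
print(tgt)  # original polite target
```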
Dec-15-2023