Iterative Self-Improvement of Vision Language Models for Image Scoring and Self-Explanation

Naoto Tanji, Toshihiko Yamasaki

arXiv.org Artificial Intelligence 

ABSTRACT

Image scoring is a crucial task in numerous real-world applications. To trust a model's judgment, understanding its rationale is essential. This paper proposes a novel training method for Vision Language Models (VLMs) to generate not only image scores but also corresponding justifications in natural language. Leveraging only an image scoring dataset and an instruction-tuned VLM, our method enables self-training, utilizing the VLM's own generated text without relying on external data or models. In addition, we introduce a simple method for creating a dataset designed to improve alignment between predicted scores and their textual justifications. By iteratively training the model with Direct Preference Optimization on two distinct datasets and merging the resulting models, we improve both scoring accuracy and the coherence of generated explanations.

Index Terms-- Vision language model, Explainable AI, Image scoring, Self-training, Direct Preference Optimization

1. INTRODUCTION

Deep learning is revolutionizing image analysis, enabling automated classification and scoring with enhanced accuracy and efficiency. Examples include disease detection in medical images, defect identification in quality control, and predicting advertising effectiveness.
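The abstract's merging step, combining the models trained with DPO on the two datasets, can be sketched as a simple linear interpolation of checkpoint parameters. This is only an illustrative sketch, not the paper's exact procedure; the function name, the toy scalar "checkpoints", and the 0.5 mixing weight are all assumptions for illustration.

```python
# Hypothetical sketch of checkpoint merging: interpolate the parameters of
# two fine-tuned models (e.g., one tuned for scoring accuracy, one for
# score/explanation alignment) into a single merged model.
def merge_state_dicts(sd_a, sd_b, alpha=0.5):
    """Return alpha * sd_a + (1 - alpha) * sd_b, key by key."""
    assert sd_a.keys() == sd_b.keys(), "checkpoints must share parameter names"
    return {k: alpha * sd_a[k] + (1 - alpha) * sd_b[k] for k in sd_a}

# Toy example with scalar "parameters" standing in for weight tensors:
sd_score = {"w": 1.0, "b": 0.0}   # model tuned on the scoring dataset
sd_align = {"w": 3.0, "b": 2.0}   # model tuned on the alignment dataset
merged = merge_state_dicts(sd_score, sd_align)
print(merged)  # -> {'w': 2.0, 'b': 1.0}
```

With real VLMs the same interpolation would be applied tensor-wise to the two checkpoints' state dictionaries; the averaging itself is identical.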